MLPs (multilayer perceptrons) rely on fixed activation functions such as ReLU, sigmoid, or tanh, while KANs (Kolmogorov-Arnold Networks) replace them with trainable activation functions, proposed as a more flexible and interpretable alternative for building neural networks.
KANs question the assumption that activation functions must remain fixed throughout training: instead of choosing an activation in advance, a KAN treats each activation as a learnable function and updates it alongside the weights during training.
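To make the contrast concrete, here is a minimal, hypothetical sketch of the idea of a learnable activation: a fixed ReLU has no parameters, whereas a KAN-style activation can be modeled as a trainable linear combination of fixed basis functions (Gaussian bumps here for simplicity; actual KANs use B-splines). All names and the basis choice are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def relu(x):
    # Fixed activation: its shape never changes during training
    return np.maximum(0.0, x)

class LearnableActivation:
    """phi(x) = sum_i c_i * b_i(x); the coefficients c_i are trained.

    Illustrative stand-in for a KAN edge activation (real KANs use
    B-spline bases rather than the Gaussian bumps used here).
    """
    def __init__(self, n_basis=5, grid=(-2.0, 2.0), seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(*grid, n_basis)   # fixed basis centers
        self.coef = rng.normal(0.0, 0.1, n_basis)    # trainable coefficients

    def basis(self, x):
        # Gaussian radial basis functions, shape (len(x), n_basis)
        return np.exp(-((x[:, None] - self.centers[None, :]) ** 2))

    def __call__(self, x):
        return self.basis(x) @ self.coef

    def grad_step(self, x, target, lr=0.1):
        # One gradient-descent step on squared error: the activation
        # itself is reshaped toward the target function
        B = self.basis(x)
        err = B @ self.coef - target
        self.coef -= lr * B.T @ err / len(x)
        return 0.5 * np.mean(err ** 2)

x = np.linspace(-2.0, 2.0, 50)
act = LearnableActivation()
losses = [act.grad_step(x, np.sin(x)) for _ in range(200)]
print(losses[-1] < losses[0])  # the activation learned a new shape
```

The point of the sketch is only that the activation's shape is a set of parameters: gradient descent bends it toward whatever function the data demands, which a fixed ReLU cannot do.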