Kolmogorov–Arnold Networks
Type of artificial neural network architecture
Kolmogorov–Arnold Networks (KANs) are a type of artificial neural network architecture inspired by the Kolmogorov–Arnold representation theorem, also known as the superposition theorem. Unlike traditional multilayer perceptrons (MLPs), which rely on fixed activation functions and linear weights, KANs replace each weight with a learnable univariate function, often represented using splines.[1][2][3]
History
KANs were proposed by Liu et al. (2024)[4] as an architecture that generalizes the Kolmogorov–Arnold representation theorem (KART) to networks of arbitrary width and depth, aiming to outperform MLPs in small-scale AI and scientific tasks. Before KANs, numerous studies explored KART's connections to neural networks or used it as a basis for designing new network architectures.
In the 1980s and 1990s, early research applied KART to neural network design. Kůrková et al. (1992)[5], Hecht-Nielsen (1987)[6], and Nees (1994)[7] established theoretical foundations for multilayer networks based on KART. Igelnik et al. (2003)[8] introduced the Kolmogorov Spline Network using cubic splines to model complex functions. Sprecher (1996, 1997)[9][10] developed numerical methods for constructing network layers, while Nakamura et al. (1993)[11] designed activation functions with guaranteed approximation accuracy. These efforts bridged KART’s theoretical potential with practical neural network implementation.
KART has also been applied in other computational and theoretical domains. Coppejans (2004)[12] developed nonparametric regression estimators using B-splines, Bryant (2008)[13] applied it to high-dimensional image tasks, and Liu (2015)[14] explored theoretical applications in optimal transport and image encryption. More recently, Polar and Poluektov (2021)[15] used Urysohn operators for efficient KART construction, while Fakhoury et al. (2022)[16] introduced ExSpliNet, integrating KART with probabilistic trees and multivariate B-splines for enhanced function approximation.
Architecture
KANs are based on the Kolmogorov–Arnold representation theorem, which is closely related to Hilbert's thirteenth problem.[17][18][19]
Given $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ consisting of n variables, a multivariate continuous function $f(\mathbf{x})$ can be represented as:
- $f(\mathbf{x}) = \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right)$ (1)
This formulation contains two nested summations: an outer and an inner sum. The outer sum $\sum_{q=0}^{2n}$ aggregates $2n+1$ terms, each involving a function $\Phi_q$. The inner sum computes n terms for each q, where each term is a continuous function $\phi_{q,p}(x_p)$ of the single variable $x_p$. The inner continuous functions $\phi_{q,p}$ are universal and independent of $f$, while the outer functions $\Phi_q$ depend on the specific function being represented. The representation (1) holds for all multivariate functions $f$. If $f$ is continuous, then the outer functions $\Phi_q$ are continuous; if $f$ is discontinuous, then the corresponding $\Phi_q$ are generally discontinuous, while the inner functions remain the same universal functions.[19]
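For example, for a function of two variables ($n = 2$), the representation (1) is a sum of $2n + 1 = 5$ outer terms, each applied to a sum of two univariate inner functions:
- $f(x_1, x_2) = \sum_{q=0}^{4} \Phi_q\!\left(\phi_{q,1}(x_1) + \phi_{q,2}(x_2)\right)$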
Liu et al.[1] proposed the name KAN. A general KAN network consisting of L layers takes an input vector $\mathbf{x}$ and generates the output as the composition of its layers:
- $\mathrm{KAN}(\mathbf{x}) = \left(\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_1 \circ \Phi_0\right)(\mathbf{x})$ (3)
Here, $\Phi_l$ is the function matrix of the l-th KAN layer, i.e. a set of pre-activations.
Let i denote the neuron of the l-th layer and j the neuron of the (l+1)-th layer. The activation function $\phi_{l,j,i}$ connects (l, i) to (l+1, j):
- $\phi_{l,j,i}, \quad l = 0, \ldots, L-1, \quad i = 1, \ldots, n_l, \quad j = 1, \ldots, n_{l+1}$ (4)
where $n_l$ is the number of nodes of the l-th layer.
Thus, the function matrix $\Phi_l$ can be represented as an $n_{l+1} \times n_l$ matrix of activations:
- $\Phi_l(\cdot) = \begin{pmatrix} \phi_{l,1,1}(\cdot) & \cdots & \phi_{l,1,n_l}(\cdot) \\ \vdots & \ddots & \vdots \\ \phi_{l,n_{l+1},1}(\cdot) & \cdots & \phi_{l,n_{l+1},n_l}(\cdot) \end{pmatrix}$
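The following is a minimal, illustrative sketch of a KAN layer in Python. It assumes each edge function $\phi_{l,j,i}$ is a linear combination of fixed Gaussian radial basis functions, whereas the implementation of Liu et al. parameterizes each edge with a B-spline basis plus a residual SiLU term; the class and parameter names (KANLayer, n_basis, etc.) are chosen for this example only.

```python
# Minimal, illustrative KAN layer (not the reference implementation).
# Each edge function phi_{l,j,i} is a learnable linear combination of K
# fixed Gaussian radial basis functions over the expected input range.
import numpy as np

class KANLayer:
    def __init__(self, n_in, n_out, n_basis=8, x_min=-1.0, x_max=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Basis centers spread uniformly over the expected input range.
        self.centers = np.linspace(x_min, x_max, n_basis)             # (K,)
        self.width = (x_max - x_min) / n_basis
        # One coefficient vector per edge (j, i): shape (n_out, n_in, K).
        self.coef = rng.normal(scale=0.1, size=(n_out, n_in, n_basis))

    def forward(self, x):
        # x: (batch, n_in); evaluate every basis function at every input value.
        d = x[..., None] - self.centers                                # (batch, n_in, K)
        basis = np.exp(-(d / self.width) ** 2)
        # phi_{l,j,i}(x_i) = sum_k coef[j, i, k] * basis_k(x_i); summing over
        # the incoming nodes i gives the next-layer node values x_{l+1, j}.
        return np.einsum('bik,jik->bj', basis, self.coef)

# Stacking such layers corresponds to the composition in equation (3).
layer0 = KANLayer(n_in=2, n_out=5)
layer1 = KANLayer(n_in=5, n_out=1, seed=1)
x = np.random.default_rng(2).uniform(-1.0, 1.0, size=(4, 2))
y = layer1.forward(layer0.forward(x))
print(y.shape)  # (4, 1)
```

In this sketch the einsum evaluates all edge functions at once and sums them over the incoming nodes, which is the per-layer counterpart of the inner sum in the representation (1).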
Functions used in KAN
The choice of functional basis strongly influences the performance of KANs. Common function families include the following (an illustrative sketch of alternative bases appears after the list):
- B-splines: Provide locality, smoothness, and interpretability; they are the most widely used in current implementations.[3]
- Radial basis functions (RBFs): Capture localized features in data and are effective in approximating functions with non-linear or clustered structure.[3][20]
- Chebyshev polynomials: Offer efficient approximation with minimized error in the maximum norm, making them useful for stable function representation.[3][21]
- Rational functions: Useful for approximating functions with singularities or sharp variations, as they can model asymptotic behavior better than polynomials.[3][22]
- Fourier series: Capture periodic patterns effectively and are particularly useful in domains such as physics-informed machine learning.[3][23]
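As an illustration, the following sketch shows how two of these alternative bases could be evaluated in Python so that they could replace the Gaussian RBF basis in the KANLayer sketch above; the degree and period values are arbitrary choices for this example, not settings taken from the cited works.

```python
# Illustrative basis alternatives for KAN edge functions. Each function returns
# an array of basis values with one extra trailing axis, matching the shape
# convention of the Gaussian RBF basis in the KANLayer sketch above.
import numpy as np
from numpy.polynomial import chebyshev

def chebyshev_basis(x, degree=7):
    # Chebyshev polynomials T_0 .. T_degree evaluated at x (x assumed in [-1, 1]);
    # returns an array of shape x.shape + (degree + 1,).
    return chebyshev.chebvander(x, degree)

def fourier_basis(x, n_freq=4, period=2.0):
    # Cosine/sine pairs up to n_freq harmonics, useful for periodic targets;
    # returns an array of shape x.shape + (2 * n_freq,).
    k = np.arange(1, n_freq + 1)
    angles = 2.0 * np.pi * x[..., None] * k / period
    return np.concatenate([np.cos(angles), np.sin(angles)], axis=-1)
```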
Usage
KANs can be employed as drop-in replacements for MLP layers in modern neural architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers. While KANs are designed as general-purpose models, researchers have developed and applied them to a variety of tasks:
- Scientific machine learning (SciML): Function fitting[1], partial differential equations (PDEs)[1][28][29], and the discovery of physical and mathematical laws.[2]
- Continual learning: Owing to the locality of spline adjustments, KANs better preserve previously learned information during incremental updates, mitigating catastrophic forgetting.[2][30]
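As a toy illustration of the function-fitting use case (not drawn from the cited references), a single-output, single-layer KAN is an additive model $\sum_i \phi_i(x_i)$ that is linear in its basis coefficients, so it can be fitted to an additive target by ordinary least squares; non-additive targets require stacking layers as in equation (3). The target function and basis sizes below are arbitrary choices.

```python
# Toy sketch: fit a single-output, single-layer KAN (an additive model) to the
# additive target f(x1, x2) = sin(pi*x1) + x2**2 by linear least squares,
# using the same Gaussian RBF edge basis as in the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))                      # training inputs
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2                  # additive target

centers = np.linspace(-1, 1, 12)                            # shared basis centers
width = 2.0 / 12
basis = np.exp(-((X[..., None] - centers) / width) ** 2)    # (N, 2, 12)
design = basis.reshape(len(X), -1)                          # one column per (input, basis fn)

coef, *_ = np.linalg.lstsq(design, y, rcond=None)           # fit edge coefficients
pred = design @ coef
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```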
Drawbacks of KAN
KANs can be computationally intensive and require a large number of parameters, because each connection carries a learnable spline (piecewise-polynomial) function rather than a single scalar weight.[33][34]