From Kolmogorov to Cauchy: Shallow XNet Surpasses KANs
Abstract
We study a shallow variant of XNet, a neural architecture whose activation functions are derived from the Cauchy integral formula. While prior work focused on deep variants, we show that even a single-layer XNet exhibits near-exponential approximation rates—exceeding the polynomial bounds of MLPs and spline-based networks such as Kolmogorov–Arnold Networks (KANs). Empirically, XNet reduces approximation error by over 600× on discontinuous functions, achieves up to 20,000× lower residuals in physics-informed PDEs, and improves policy accuracy and sample efficiency in PPO-based reinforcement learning—while maintaining comparable or better computational efficiency than KAN baselines. These results demonstrate that expressive approximation can stem from principled activation design rather than depth alone, offering a compact, theoretically grounded alternative for function approximation, scientific computing, and control.
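The abstract does not spell out the exact parameterization of the Cauchy-derived activation. As a rough, hedged illustration of the idea of a shallow network whose expressiveness comes from its activation design rather than its depth, the sketch below implements a single hidden layer with a Cauchy-kernel-shaped activation carrying trainable location and scale parameters. The class names (`CauchyActivation`, `ShallowXNet`) and the specific functional form are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class CauchyActivation(nn.Module):
    """Cauchy-kernel-shaped activation with trainable parameters.

    Hypothetical form: phi(z) = a*(z-d)/((z-d)^2 + c^2) + b/((z-d)^2 + c^2).
    The paper's exact Cauchy-integral-based parameterization may differ.
    """

    def __init__(self, num_units):
        super().__init__()
        self.d = nn.Parameter(torch.zeros(num_units))   # location
        self.c = nn.Parameter(torch.ones(num_units))    # scale
        self.a = nn.Parameter(torch.ones(num_units))    # odd-part weight
        self.b = nn.Parameter(torch.zeros(num_units))   # even-part weight

    def forward(self, z):
        u = z - self.d
        denom = u * u + self.c * self.c
        return self.a * u / denom + self.b / denom


class ShallowXNet(nn.Module):
    """Single hidden layer: linear map -> Cauchy activation -> linear readout."""

    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.pre = nn.Linear(in_dim, hidden)
        self.act = CauchyActivation(hidden)
        self.post = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.post(self.act(self.pre(x)))


if __name__ == "__main__":
    # Toy sanity check: fit a discontinuous 1-D target (sign function),
    # the kind of function where the abstract reports large error reductions.
    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 512).unsqueeze(1)
    y = torch.sign(x)
    model = ShallowXNet(1, 64, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.3e}")
```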
Cite
Text
Li et al. "From Kolmogorov to Cauchy: Shallow XNet Surpasses KANs." Advances in Neural Information Processing Systems, 2025.
Markdown
[Li et al. "From Kolmogorov to Cauchy: Shallow XNet Surpasses KANs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/li2025neurips-kolmogorov/)
BibTeX
@inproceedings{li2025neurips-kolmogorov,
title = {{From Kolmogorov to Cauchy: Shallow XNet Surpasses KANs}},
author = {Li, Xin and Zheng, Xiaotao and Xia, Zhihong Jeff},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/li2025neurips-kolmogorov/}
}