A Theoretical Study of Neural Network Expressive Power via Manifold Topology
Abstract
A prevalent assumption regarding real-world data is that it lies on or close to a low-dimensional manifold. When a neural network is deployed on such data, the required size, i.e., the number of neurons, depends heavily on the intricacy of the underlying latent manifold. While significant advances have been made in understanding the geometric attributes of manifolds, topology is an equally fundamental characteristic. In this study, we investigate network expressive power in terms of the latent data manifold. Integrating both topological and geometric facets of the data manifold, we present an upper bound on the size of ReLU neural networks.
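To make the "topological facets" concrete, the minimal sketch below estimates the Betti numbers beta_0 (connected components) and beta_1 (independent loops) of a latent manifold from finite samples, using the 1-skeleton of a Vietoris-Rips complex. This is an illustration of the general idea, not the paper's method; the function name `rips_betti_01`, the radius `eps`, and the sample size are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): Betti numbers of a sampled
# manifold via the 1-skeleton of a Vietoris-Rips complex at scale eps.
import numpy as np

def rips_betti_01(points: np.ndarray, eps: float) -> tuple[int, int]:
    """Betti_0 and Betti_1 of the Rips 1-skeleton at scale eps.

    For a densely sampled closed curve and a well-chosen eps, these match
    the Betti numbers of the underlying manifold (e.g., the circle S^1
    has beta_0 = beta_1 = 1).
    """
    n = len(points)
    # Union-find over vertices to count connected components (beta_0).
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Add an edge whenever two samples lie within distance eps.
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if dists[i, j] <= eps]
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    beta0 = len({find(i) for i in range(n)})
    # Euler characteristic of a graph: V - E = beta_0 - beta_1,
    # so beta_1 = E - V + beta_0. (Filling in 2-simplices, as the full
    # Rips complex would, can only remove spurious cycles.)
    beta1 = len(edges) - n + beta0
    return beta0, beta1

# Usage: 40 evenly spaced samples of the unit circle S^1.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(rips_betti_01(circle, eps=0.3))  # -> (1, 1): one component, one loop
```

At this scale only adjacent samples are connected, so the Rips graph is a single cycle and the estimates recover beta_0 = beta_1 = 1 for the circle; size bounds of the kind the paper studies are sensitive to exactly such invariants alongside geometric quantities.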
Cite
Text
Yao et al. "A Theoretical Study of Neural Network Expressive Power via Manifold Topology." Transactions on Machine Learning Research, 2025.

BibTeX
@article{yao2025tmlr-theoretical,
  title   = {{A Theoretical Study of Neural Network Expressive Power via Manifold Topology}},
  author  = {Yao, Jiachen and Yi, Lingjie and Goswami, Mayank and Chen, Chao},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/yao2025tmlr-theoretical/}
}