Mean-Field Analysis of Polynomial-Width Two-Layer Neural Network Beyond Finite Time Horizon
Abstract
We study the approximation gap between the dynamics of a polynomial-width neural network and its infinite-width counterpart, both trained using projected gradient descent in the mean-field scaling regime. We demonstrate how to tightly bound this approximation gap through a differential equation governed by the mean-field dynamics. A key quantity controlling the growth of this ODE is the local Hessian of each particle, defined as the derivative of the particle's velocity in the mean-field dynamics with respect to its position. We apply our results to the canonical feature learning problem of estimating a well-specified single-index model; we permit the information exponent to be arbitrarily large, leading to convergence times that grow polynomially in the ambient dimension d. We show that, due to a certain "self-concordance" property in these problems, whereby the local Hessian of a particle is bounded by a constant multiple of the particle's velocity, polynomially many neurons suffice to closely approximate the mean-field dynamics throughout training.
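A minimal notational sketch of the quantities the abstract names (the symbols v_t, H_t, Δ_t, and C are ours, not the paper's, and the gap inequality is only a schematic Grönwall-type form, not the paper's precise bound):

% v_t(x): velocity of a particle at position x under the mean-field dynamics
% H_t(x): the "local Hessian", i.e. the derivative of that velocity in position
H_t(x) := \nabla_x\, v_t(x),
\qquad
\|H_t(x)\| \le C\,\|v_t(x)\| \quad \text{("self-concordance")}.

% Δ_t: distance between a finite-width particle and its mean-field coupling;
% its growth is governed by an ODE driven by the local Hessian, schematically
\frac{d}{dt}\,\Delta_t \;\lesssim\; \|H_t\|\,\Delta_t \;+\; \text{(finite-width fluctuation terms)}.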
Cite
Text
Glasgow et al. "Mean-Field Analysis of Polynomial-Width Two-Layer Neural Network Beyond Finite Time Horizon." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.
Markdown
[Glasgow et al. "Mean-Field Analysis of Polynomial-Width Two-Layer Neural Network Beyond Finite Time Horizon." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.](https://mlanthology.org/colt/2025/glasgow2025colt-meanfield/)
BibTeX
@inproceedings{glasgow2025colt-meanfield,
title = {{Mean-Field Analysis of Polynomial-Width Two-Layer Neural Network Beyond Finite Time Horizon}},
author = {Glasgow, Margalit and Wu, Denny and Bruna, Joan},
booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
year = {2025},
pages = {2461-2539},
volume = {291},
url = {https://mlanthology.org/colt/2025/glasgow2025colt-meanfield/}
}