On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport

Abstract

Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.
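
The abstract describes discretizing the unknown measure into a mixture of particles and running a continuous-time gradient flow on their weights and positions; for a single-hidden-layer network this corresponds to ordinary over-parameterized training. The sketch below is not the authors' code: it is a minimal explicit-Euler discretization of such a particle gradient flow for a ReLU network with a quadratic loss, where the dimensions, step size, data, and teacher network are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): Euler
# discretization of a gradient flow on particle weights w_i and positions
# theta_i for f(x) = (1/m) * sum_i w_i * relu(theta_i . x).
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 200, 256              # input dim, number of particles, samples (assumed)

# Synthetic regression data from a small "teacher" network (illustrative).
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal((3, d))
y = np.maximum(X @ theta_star.T, 0.0).sum(axis=1)

theta = rng.standard_normal((m, d))  # particle positions
w = rng.standard_normal(m)           # particle weights
lr = 0.2                             # step size of the Euler discretization (assumed)

for step in range(3000):
    pre = X @ theta.T                # (n, m) pre-activations
    act = np.maximum(pre, 0.0)       # ReLU features
    pred = act @ w / m               # mean over particles
    residual = pred - y              # gradient of the quadratic loss w.r.t. pred

    # Gradients of L = (1/2n) * sum_j (pred_j - y_j)^2 w.r.t. weights and positions.
    grad_w = act.T @ residual / (n * m)
    grad_theta = ((residual[:, None] * (pre > 0) * w[None, :]).T @ X) / (n * m)

    # Scale the step by m so each particle moves at an m-independent rate
    # (the usual mean-field scaling of the gradient flow).
    w -= lr * m * grad_w
    theta -= lr * m * grad_theta

    if step % 1000 == 0:
        print(f"step {step:4d}  loss {0.5 * np.mean(residual**2):.5f}")
```

Increasing `m` makes the empirical measure over particles a finer discretization of the underlying measure, which is the many-particle regime in which the paper studies global convergence.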

Cite

Text

Chizat and Bach. "On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport." Neural Information Processing Systems, 2018.

Markdown

[Chizat and Bach. "On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/chizat2018neurips-global/)

BibTeX

@inproceedings{chizat2018neurips-global,
  title     = {{On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport}},
  author    = {Chizat, Lénaïc and Bach, Francis},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3036--3046},
  url       = {https://mlanthology.org/neurips/2018/chizat2018neurips-global/}
}