Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
Abstract
Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with N (N being the number of training samples). In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee. In particular, all the remaining layers are allowed to have constant widths, and form a pyramidal topology. We show an application of our result to the widely used Xavier initialization and obtain an over-parameterization requirement for the single wide layer of order N^2.
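To make the architecture concrete, here is a minimal PyTorch sketch of the setting the abstract describes; it is ours, not the authors' code. The helper name make_pyramidal_net, the choice of Tanh, and all the widths are illustrative assumptions, and "pyramidal topology" is read as non-increasing widths after the single wide layer.

import torch
import torch.nn as nn

def make_pyramidal_net(d_in, width_first, pyramid_widths, d_out=1):
    # One wide hidden layer right after the input, then layers of constant,
    # non-increasing widths (the pyramidal topology described in the abstract).
    widths = [d_in, width_first, *pyramid_widths, d_out]
    layers = []
    for i in range(len(widths) - 1):
        linear = nn.Linear(widths[i], widths[i + 1])
        # Xavier (Glorot) initialization, the scheme named in the abstract.
        nn.init.xavier_normal_(linear.weight)
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(widths) - 2:
            # Placeholder smooth nonlinearity; the paper states its own
            # assumptions on the activation function.
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

# N = 100 training samples: width N suffices for the wide layer in general,
# while the abstract's Xavier application asks for width of order N^2.
N = 100
net = make_pyramidal_net(d_in=32, width_first=N, pyramid_widths=(50, 20, 10))
x = torch.randn(N, 32)   # the N training inputs
out = net(x)             # shape (N, 1)

Under this reading, any non-increasing tail such as (50, 20, 10) satisfies the pyramidal constraint, and only the first hidden layer needs to grow with N.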
Cite
Text
Nguyen and Mondelli. "Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology." Neural Information Processing Systems, 2020.
Markdown
[Nguyen and Mondelli. "Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/nguyen2020neurips-global/)
BibTeX
@inproceedings{nguyen2020neurips-global,
  title     = {{Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology}},
  author    = {Nguyen, Quynh N and Mondelli, Marco},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/nguyen2020neurips-global/}
}