Global Convergence of SGD for Logistic Loss on Two Layer Neural Nets
Abstract
In this note, we demonstrate a first-of-its-kind provable convergence of SGD to the global minima of an appropriately regularized logistic empirical risk of depth-$2$ nets -- for arbitrary data, for any number of gates with adequately smooth and bounded activations (like sigmoid and tanh), and for a class of distributions from which the initial weights are sampled. We also prove an exponentially fast convergence rate for continuous-time SGD, which also applies to smooth unbounded activations like SoftPlus. Our key idea is to show that the logistic loss function on a neural net of any size can be Frobenius norm regularized by a width-independent parameter such that the regularized loss is a "Villani function" -- which lets us build on recent progress in analyzing SGD on such objectives.
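For concreteness, here is a plausible form of the regularized objective in illustrative notation (the exact setup -- which layers are trained, the precise regularizer, and the width-independent threshold on the regularization strength $\lambda$ -- is specified in the paper, and may differ from this sketch). For a depth-$2$ net $f(x; W, a) = \sum_{j=1}^{p} a_j \, \sigma(\langle w_j, x \rangle)$ with $\sigma$ a sigmoid or tanh gate, training data $(x_i, y_i)_{i=1}^{n}$ with labels $y_i \in \{-1, +1\}$, the Frobenius-norm-regularized logistic empirical risk would read
$$\tilde{\mathcal{L}}(W, a) \;=\; \frac{1}{n} \sum_{i=1}^{n} \log\!\Big(1 + e^{-y_i \, f(x_i; W, a)}\Big) \;+\; \frac{\lambda}{2}\Big(\|W\|_F^{2} + \|a\|_2^{2}\Big),$$
and the claim of the paper is that once $\lambda$ exceeds a width-independent value, such a regularized loss is a Villani function, so SGD on it provably converges to its global minimum.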
Cite
Text
Gopalani et al. "Global Convergence of SGD for Logistic Loss on Two Layer Neural Nets." Transactions on Machine Learning Research, 2024.
Markdown
[Gopalani et al. "Global Convergence of SGD for Logistic Loss on Two Layer Neural Nets." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/gopalani2024tmlr-global/)
BibTeX
@article{gopalani2024tmlr-global,
title = {{Global Convergence of SGD for Logistic Loss on Two Layer Neural Nets}},
author = {Gopalani, Pulkit and Jha, Samyak and Mukherjee, Anirbit},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/gopalani2024tmlr-global/}
}