Small Nonlinearities in Activation Functions Create Bad Local Minima in Neural Networks

Abstract

We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with "slightest" nonlinearity, the empirical risks have spurious local minima in most cases. Our results thus indicate that in general "no spurious local minima" is a property limited to deep linear networks, and insights obtained from linear networks may not be robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our results make the least restrictive assumptions relative to existing results on spurious local optima in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.
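To make the object of study concrete, the following is a minimal sketch (not the paper's construction) of the empirical risk of a one-hidden-layer ReLU network on a small dataset; the toy data, widths, and function names here are illustrative assumptions only.

import numpy as np

def empirical_risk(W1, b1, W2, b2, X, Y):
    """Squared-loss empirical risk of x -> W2 ReLU(W1 x + b1) + b2."""
    H = np.maximum(W1 @ X + b1, 0.0)          # hidden-layer ReLU activations
    residual = W2 @ H + b2 - Y                # prediction error per sample
    return 0.5 * np.mean(np.sum(residual**2, axis=0))

# Toy dataset: 2-dimensional inputs, scalar targets (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 8))
Y = rng.standard_normal((1, 8))

# Candidate parameters for a width-3 hidden layer.
W1, b1 = rng.standard_normal((3, 2)), np.zeros((3, 1))
W2, b2 = rng.standard_normal((1, 3)), np.zeros((1, 1))

print(empirical_risk(W1, b1, W2, b2, X, Y))

The paper's claims concern local minima of this kind of empirical risk as a function of (W1, b1, W2, b2); the snippet only defines the landscape and evaluates it at one point, it does not reproduce any of the constructions in the paper.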

Cite

Text

Yun et al. "Small Nonlinearities in Activation Functions Create Bad Local Minima in Neural Networks." International Conference on Learning Representations, 2019.

Markdown

[Yun et al. "Small Nonlinearities in Activation Functions Create Bad Local Minima in Neural Networks." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/yun2019iclr-small/)

BibTeX

@inproceedings{yun2019iclr-small,
  title     = {{Small Nonlinearities in Activation Functions Create Bad Local Minima in Neural Networks}},
  author    = {Yun, Chulhee and Sra, Suvrit and Jadbabaie, Ali},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/yun2019iclr-small/}
}