On the Loss Landscape of a Class of Deep Neural Networks with No Bad Local Valleys

Abstract

We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and becomes arbitrarily close to zero. This implies that these networks have no sub-optimal strict local minima.

Cite

Text

Quynh Nguyen, Mahesh Chandra Mukkamala, and Matthias Hein. "On the Loss Landscape of a Class of Deep Neural Networks with No Bad Local Valleys." International Conference on Learning Representations, 2019.

Markdown

[Quynh Nguyen, Mahesh Chandra Mukkamala, and Matthias Hein. "On the Loss Landscape of a Class of Deep Neural Networks with No Bad Local Valleys." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/nguyen2019iclr-loss/)

BibTeX

@inproceedings{nguyen2019iclr-loss,
  title     = {{On the Loss Landscape of a Class of Deep Neural Networks with No Bad Local Valleys}},
  author    = {Nguyen, Quynh and Mukkamala, Mahesh Chandra and Hein, Matthias},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/nguyen2019iclr-loss/}
}