Adding One Neuron Can Eliminate All Bad Local Minima
Abstract
One of the main difficulties in analyzing neural networks is the non-convexity of the loss function, which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
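As a rough illustration of the construction (not the paper's full result, which also involves a regularizer on the added neuron's parameters), the augmented predictor adds an exponential unit connected directly to the output; the function and parameter names below are illustrative, assuming the added neuron computes `a * exp(w·x + b)`:

```python
import numpy as np

def augmented_output(f_out, x, a, w, b):
    """Output after adding one special (exponential) neuron with a
    skip connection straight to the network output.

    f_out : scalar output of the original network on input x
    a, w, b : parameters of the added neuron (illustrative names)
    """
    return f_out + a * np.exp(np.dot(w, x) + b)

# Toy usage on a 3-dimensional input where the original network outputs 0.5.
x = np.array([1.0, -2.0, 0.5])
y = augmented_output(0.5, x, a=0.0, w=np.zeros(3), b=0.0)
# With a = 0 the added neuron contributes nothing, so the original
# network's output is recovered unchanged.
```

Note that setting `a = 0` switches the extra neuron off, so the augmented model can always represent the original one; the paper's analysis shows that, under its assumptions, training the augmented model leaves no bad local minima.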
Cite
Text
Liang et al. "Adding One Neuron Can Eliminate All Bad Local Minima." Neural Information Processing Systems, 2018.
Markdown
[Liang et al. "Adding One Neuron Can Eliminate All Bad Local Minima." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/liang2018neurips-adding/)
BibTeX
@inproceedings{liang2018neurips-adding,
title = {{Adding One Neuron Can Eliminate All Bad Local Minima}},
author = {Liang, Shiyu and Sun, Ruoyu and Lee, Jason and Srikant, R.},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {4350-4360},
url = {https://mlanthology.org/neurips/2018/liang2018neurips-adding/}
}