Qualitatively Characterizing Neural Network Optimization Problems
Abstract
Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state-of-the-art neural networks never encounter any significant obstacles.
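The analysis technique the abstract refers to is a one-dimensional probe of the loss surface: evaluate the training objective at interpolated parameters theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final for alpha in [0, 1], and inspect the resulting curve for barriers between initialization and solution. The sketch below shows one way such a probe could be run, assuming a PyTorch model; the function name interpolate_loss and its arguments are illustrative rather than taken from the paper.

import torch

def interpolate_loss(model, theta_init, theta_final, loss_fn, inputs, targets, alphas):
    """Evaluate the loss along the straight line between two parameter settings.

    theta_init and theta_final are lists of tensors matching model.parameters();
    returns one loss value per interpolation coefficient in alphas.
    """
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            # theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final
            for p, p0, p1 in zip(model.parameters(), theta_init, theta_final):
                p.copy_((1 - alpha) * p0 + alpha * p1)
            losses.append(loss_fn(model(inputs), targets).item())
    return losses

# Example use: probe 25 evenly spaced points between initialization and solution.
# alphas = torch.linspace(0.0, 1.0, 25)
# losses = interpolate_loss(model, theta_init, theta_final, loss_fn, x, y, alphas)

A smoothly decreasing, roughly convex loss curve along this path is the kind of evidence the abstract describes: no significant obstacle separates the initial parameters from the trained ones.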
Cite
Text
Goodfellow and Vinyals. "Qualitatively Characterizing Neural Network Optimization Problems." International Conference on Learning Representations, 2015.

Markdown

[Goodfellow and Vinyals. "Qualitatively Characterizing Neural Network Optimization Problems." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/goodfellow2015iclr-qualitatively/)

BibTeX
@inproceedings{goodfellow2015iclr-qualitatively,
title = {{Qualitatively Characterizing Neural Network Optimization Problems}},
author = {Goodfellow, Ian J. and Vinyals, Oriol},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/goodfellow2015iclr-qualitatively/}
}