Gradient Descent Can Take Exponential Time to Escape Saddle Points
Abstract
Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not slowed down by saddle points—it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.
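To make the contrast concrete, here is a minimal, self-contained sketch (not taken from the paper) comparing plain GD with a simple perturbed variant in the spirit of Jin et al. [2017] on a toy objective with a strict saddle at the origin. The function, step size, and perturbation rule are illustrative choices of ours, not the paper's construction.

```python
import numpy as np

# Toy objective with a strict saddle at the origin and minima at (0, +1), (0, -1):
#   f(x, y) = x^2 / 2 + y^4 / 4 - y^2 / 2
def grad(p):
    x, y = p
    return np.array([x, y**3 - y])

def run_gd(p0, eta=0.1, steps=2000, perturb=False, radius=1e-2, seed=0):
    """Plain GD, or (if perturb=True) GD that adds a small uniform random kick
    whenever the gradient is nearly zero, in the spirit of perturbed GD."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        g = grad(p)
        if perturb and np.linalg.norm(g) < 1e-3:
            p = p + rng.uniform(-radius, radius, size=2)
            g = grad(p)
        p = p - eta * g
    return p

# Initialize on the stable manifold of the saddle (the x-axis): plain GD converges
# to the saddle point (0, 0), while the perturbed variant escapes to a minimizer.
print("plain GD:    ", run_gd((1.0, 0.0)))
print("perturbed GD:", run_gd((1.0, 0.0), perturb=True))
```

Under this setup, plain GD initialized at (1, 0) drifts into the saddle and stays there, while the perturbed run reaches one of the two local minimizers; this mirrors, in miniature, the GD-versus-perturbed-GD separation the paper establishes.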
Cite
Text
Du et al. "Gradient Descent Can Take Exponential Time to Escape Saddle Points." Neural Information Processing Systems, 2017.
Markdown
[Du et al. "Gradient Descent Can Take Exponential Time to Escape Saddle Points." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/du2017neurips-gradient/)
BibTeX
@inproceedings{du2017neurips-gradient,
title = {{Gradient Descent Can Take Exponential Time to Escape Saddle Points}},
author = {Du, Simon S and Jin, Chi and Lee, Jason and Jordan, Michael I and Singh, Aarti and Poczos, Barnabas},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {1067--1077},
url = {https://mlanthology.org/neurips/2017/du2017neurips-gradient/}
}