Faster Perturbed Stochastic Gradient Methods for Finding Local Minima

Abstract

Escaping from saddle points and finding local minima is a central problem in nonconvex optimization. Perturbed gradient methods are perhaps the simplest approach to this problem. However, to find $(\epsilon, \sqrt{\epsilon})$-approximate local minima, the best existing stochastic gradient complexity for this type of algorithm is $\tilde O(\epsilon^{-3.5})$, which is not optimal. In this paper, we propose LENA (Last stEp shriNkAge), a faster perturbed stochastic gradient framework for finding local minima. We show that LENA with stochastic gradient estimators such as SARAH/SPIDER and STORM can find $(\epsilon, \epsilon_{H})$-approximate local minima within $\tilde O(\epsilon^{-3} + \epsilon_{H}^{-6})$ stochastic gradient evaluations (or $\tilde O(\epsilon^{-3})$ when $\epsilon_H = \sqrt{\epsilon}$). The core idea of our framework is a step-size shrinkage scheme that controls the average movement of the iterates, leading to faster convergence to local minima.
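
The paper's pseudocode is not reproduced on this page, so the snippet below is only a minimal sketch of the idea stated in the abstract: a perturbed stochastic gradient loop in which the final step of each inner epoch is shrunk to keep the average movement of the iterates small. The function and parameter names (`lena_style_sgd`, `grad_est`, `shrink`, `pert_radius`, `epoch_len`) and the exact placement of the shrinkage are illustrative assumptions, not the authors' LENA algorithm.

```python
import numpy as np

def lena_style_sgd(grad_est, x0, eta=0.01, shrink=0.1, epoch_len=100,
                   pert_radius=1e-3, n_epochs=50, rng=None):
    """Illustrative perturbed SGD with a last-step shrinkage heuristic.

    grad_est(x, rng) should return a stochastic gradient estimate at x
    (in the paper, a SARAH/SPIDER- or STORM-style estimator); this sketch
    treats it as a black box.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        for t in range(epoch_len):
            g = grad_est(x, rng)
            if np.linalg.norm(g) <= pert_radius:
                # Add noise near flat regions to help escape saddle points.
                g = g + rng.normal(scale=pert_radius, size=x.shape)
            # Shrink the final step of the epoch (assumed placement) to
            # limit how far the iterates drift on average per epoch.
            step = eta * shrink if t == epoch_len - 1 else eta
            x = x - step * g
    return x

# Example usage on a simple nonconvex test function.
if __name__ == "__main__":
    def grad_est(x, rng):
        # Noisy gradient of f(x) = sum(x_i^4 - x_i^2).
        return 4 * x**3 - 2 * x + 0.01 * rng.normal(size=x.shape)

    print(lena_style_sgd(grad_est, x0=np.zeros(5)))
```

Under these assumptions, shrinking only the last step of each epoch reins in the total per-epoch displacement without slowing every iteration, which is one plausible reading of "controlling the average movement of the iterates"; the paper's analysis connects this mechanism to the $\tilde O(\epsilon^{-3} + \epsilon_{H}^{-6})$ complexity bound.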

Cite

Text

Chen et al. "Faster Perturbed Stochastic Gradient Methods for Finding Local Minima." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.

Markdown

[Chen et al. "Faster Perturbed Stochastic Gradient Methods for Finding Local Minima." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.](https://mlanthology.org/alt/2022/chen2022alt-faster/)

BibTeX

@inproceedings{chen2022alt-faster,
  title     = {{Faster Perturbed Stochastic Gradient Methods for Finding Local Minima}},
  author    = {Chen, Zixiang and Zhou, Dongruo and Gu, Quanquan},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  year      = {2022},
  pages     = {176--204},
  volume    = {167},
  url       = {https://mlanthology.org/alt/2022/chen2022alt-faster/}
}