Stochastic Cubic Regularization for Fast Nonconvex Optimization

Abstract

This paper proposes a stochastic variant of a classic algorithm, the cubic-regularized Newton method (Nesterov and Polyak, 2006). The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ stochastic gradient and stochastic Hessian-vector product evaluations. The latter can be computed as efficiently as stochastic gradients. This improves upon the $\tilde{\mathcal{O}}(\epsilon^{-4})$ rate of stochastic gradient descent. Our rate matches the best-known result for finding local minima without requiring any delicate acceleration or variance-reduction techniques.
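
The abstract's key point is that the cubic-regularized Newton subproblem can be attacked using only gradient and Hessian-vector product oracles. Below is a minimal NumPy sketch of what one such step might look like; it is not the authors' implementation, and the function names, the regularization parameter `rho`, and the inner gradient-descent solver settings are illustrative assumptions.

```python
import numpy as np

def solve_cubic_subproblem(grad, hvp, rho, lr=0.01, n_iters=100):
    # Approximately minimize the cubic model
    #   m(s) = grad^T s + 0.5 * s^T H s + (rho / 3) * ||s||^3
    # by gradient descent, accessing H only through Hessian-vector products.
    s = np.zeros_like(grad)
    for _ in range(n_iters):
        # grad of the model: grad + H s + rho * ||s|| * s
        model_grad = grad + hvp(s) + rho * np.linalg.norm(s) * s
        s = s - lr * model_grad
    return s

def stochastic_cubic_step(x, stoch_grad, stoch_hvp, rho):
    # One outer iteration: build a stochastic cubic model at x from
    # mini-batch gradient and Hessian-vector product estimates, then move.
    g = stoch_grad(x)
    hvp = lambda v: stoch_hvp(x, v)
    return x + solve_cubic_subproblem(g, hvp, rho)

# Toy usage with exact (deterministic) oracles on f(x) = 0.5 * x^T A x,
# whose origin is a saddle point because A is indefinite.
A = np.diag([1.0, -0.5])
x = np.array([0.1, 0.1])
x = stochastic_cubic_step(x, lambda x: A @ x, lambda x, v: A @ v, rho=1.0)
```

In the paper, `stoch_grad` and `stoch_hvp` would be mini-batch estimates, and the outer loop would repeat this step until the cubic model certifies an approximate local minimum; the exact stopping rule and step-size choices are omitted here.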

Cite

Text

Tripuraneni et al. "Stochastic Cubic Regularization for Fast Nonconvex Optimization." Neural Information Processing Systems, 2018.

Markdown

[Tripuraneni et al. "Stochastic Cubic Regularization for Fast Nonconvex Optimization." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/tripuraneni2018neurips-stochastic/)

BibTeX

@inproceedings{tripuraneni2018neurips-stochastic,
  title     = {{Stochastic Cubic Regularization for Fast Nonconvex Optimization}},
  author    = {Tripuraneni, Nilesh and Stern, Mitchell and Jin, Chi and Regier, Jeffrey and Jordan, Michael I.},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {2899--2908},
  url       = {https://mlanthology.org/neurips/2018/tripuraneni2018neurips-stochastic/}
}