On Graduated Optimization for Stochastic Non-Convex Problems

Abstract

The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known about its theoretical convergence. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1 / ε^2) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of "zero-order optimization", and devise a variant of our algorithm which converges at a rate of O(d^2 / ε^4).
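The sketch below illustrates the general graduated-optimization idea the abstract refers to: minimize a sequence of smoothed surrogates of the objective, starting from a heavy smoothing and refining it stage by stage. This is only a minimal illustration under assumed smoothing, step-size, and scheduling choices, not the paper's exact algorithm or analysis; all function names, parameters, and the toy objective are hypothetical.

import numpy as np

def smoothed_grad(f, x, delta, n_samples=20, rng=None):
    """Zero-order estimate of the gradient of a delta-smoothed version of f at x,
    using symmetric finite differences along random unit directions."""
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u
    return (d / n_samples) * g

def graduated_optimize(f, x0, delta0=1.0, stages=8, steps=200, lr=0.05):
    """Graduated optimization sketch: run (stochastic) gradient steps on a
    smoothed surrogate, then halve the smoothing radius and repeat."""
    x, delta = np.array(x0, dtype=float), delta0
    for _ in range(stages):
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, delta)
        delta /= 2.0  # sharpen the surrogate toward the original objective
    return x

# Toy multimodal objective (purely illustrative).
f = lambda x: np.sum(x**2) + 0.5 * np.sum(np.sin(5 * x))
x_final = graduated_optimize(f, x0=2.0 * np.ones(3))

With heavy smoothing the surrogate is nearly unimodal, so early stages move the iterate toward the basin of the global optimum; later stages with smaller smoothing radius refine the solution on an objective closer to the original.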

Cite

Text

Hazan et al. "On Graduated Optimization for Stochastic Non-Convex Problems." International Conference on Machine Learning, 2016.

Markdown

[Hazan et al. "On Graduated Optimization for Stochastic Non-Convex Problems." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/hazan2016icml-graduated/)

BibTeX

@inproceedings{hazan2016icml-graduated,
  title     = {{On Graduated Optimization for Stochastic Non-Convex Problems}},
  author    = {Hazan, Elad and Levy, Kfir Yehuda and Shalev-Shwartz, Shai},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {1833--1841},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/hazan2016icml-graduated/}
}