Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

Abstract

Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function as an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that adapt automatically to the level of noise.
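To make the idea concrete, below is a minimal illustrative sketch (not the authors' exact algorithm) of SGD whose scalar stepsize is tuned on the fly by an online learner. It assumes a smoothness-based quadratic surrogate loss in the stepsize, l_t(eta) = -eta * <g_t, g_t'> + (M/2) * eta^2 * ||g_t||^2, built from two independent stochastic gradients at the current iterate, and uses projected online gradient descent over the stepsize; the paper's specific surrogate and online learner may differ, and the names grad_oracle, M, beta, and eta_max are illustrative assumptions.

import numpy as np

def sgd_with_online_stepsizes(grad_oracle, x0, T, M=10.0,
                              eta0=0.1, eta_max=1.0, beta=0.01, seed=0):
    """grad_oracle(x, rng) returns one stochastic gradient of the objective at x.
    M is an assumed smoothness constant; beta is the learning rate of the
    online learner over the stepsize eta, which is projected onto [0, eta_max]."""
    rng = np.random.default_rng(seed)
    x, eta = np.asarray(x0, dtype=float), eta0
    for t in range(T):
        g = grad_oracle(x, rng)        # gradient used to move the iterate
        g_prime = grad_oracle(x, rng)  # independent gradient used in the surrogate
        x = x - eta * g                # SGD step with the currently learned stepsize
        # Derivative of the surrogate loss l_t at the current eta:
        #   l_t'(eta) = -<g, g'> + M * eta * ||g||^2
        surrogate_grad = -np.dot(g, g_prime) + M * eta * np.dot(g, g)
        # Projected online gradient descent step on the stepsize.
        eta = float(np.clip(eta - beta * surrogate_grad, 0.0, eta_max))
    return x

The surrogate is convex in eta, so any no-regret online convex optimization algorithm can be substituted for the projected gradient step above; the online learner shrinks the stepsize when the two independent gradients disagree (high noise) and grows it when they align.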

Cite

Text

Zhuang et al. "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization." International Conference on Machine Learning, 2019.

Markdown

[Zhuang et al. "Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/zhuang2019icml-surrogate/)

BibTeX

@inproceedings{zhuang2019icml-surrogate,
  title     = {{Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization}},
  author    = {Zhuang, Zhenxun and Cutkosky, Ashok and Orabona, Francesco},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {7664--7672},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/zhuang2019icml-surrogate/}
}