Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization
Abstract
We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return an $O(\frac{1}{T})$-approximate solution after $T$ iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size.
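The first algorithm's "gradient updates and historical averaging" can be illustrated by an epoch-based SGD sketch: each epoch doubles in length and halves the step size, and the next epoch restarts from the average of the current epoch's iterates. This is a minimal illustrative sketch, not the paper's exact algorithm; the function names, constants, and the toy noise model are assumptions for the example.

```python
import random

def epoch_gd(grad, x0, epochs=6, t1=8, eta1=1.0):
    """Sketch of epoch-based SGD with historical averaging.

    Each epoch doubles its length and halves the step size; the next
    epoch restarts from the average of the current epoch's iterates.
    (Illustrative only; constants here are not tuned to the paper.)
    """
    x = x0
    t_len, eta = t1, eta1
    for _ in range(epochs):
        avg, cur = 0.0, x
        for _ in range(t_len):
            cur = cur - eta * grad(cur)  # stochastic gradient step
            avg += cur
        x = avg / t_len                  # restart from the epoch average
        t_len *= 2                       # double epoch length
        eta /= 2.0                       # halve step size
    return x

# Toy 1-D problem: f(x) = 0.5 * (x - 3)^2 is 1-strongly convex;
# the oracle returns its gradient plus Gaussian noise (illustrative).
random.seed(0)
noisy_grad = lambda x: (x - 3.0) + random.gauss(0.0, 0.5)
x_hat = epoch_gd(noisy_grad, x0=0.0)
print(x_hat)  # should be close to the minimizer 3.0
```

The within-epoch averaging damps the gradient noise, which is what lets the stepped schedule beat the $O(\frac{\log T}{T})$ rate of plain regret-based averaging on strongly convex problems.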
Cite
Text
Hazan and Kale. "Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization." Journal of Machine Learning Research, 2014.
Markdown
[Hazan and Kale. "Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization." Journal of Machine Learning Research, 2014.](https://mlanthology.org/jmlr/2014/hazan2014jmlr-beyond/)
BibTeX
@article{hazan2014jmlr-beyond,
title = {{Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization}},
author = {Hazan, Elad and Kale, Satyen},
journal = {Journal of Machine Learning Research},
year = {2014},
pages = {2489--2512},
volume = {15},
url = {https://mlanthology.org/jmlr/2014/hazan2014jmlr-beyond/}
}