Beyond the Regret Minimization Barrier: An Optimal Algorithm for Stochastic Strongly-Convex Optimization
Abstract
We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an $O(\frac1T)$-approximate solution after $T$ gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of $O(\frac{\log(T)}{T})$, which was obtained by applying an online strongly-convex optimization algorithm with regret $O(\log(T))$ to the batch setting. We complement this result by proving that any algorithm has expected regret of $\Omega(\log(T))$ in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting, which reveals more information to the algorithm than just gradients. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization, and it constitutes the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.
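To make the contrast with plain SGD concrete, the following is a minimal sketch of an epoch-based stochastic gradient scheme in the spirit of the paper's algorithm: within each epoch, SGD is run with a fixed step size, the epoch's average iterate is taken as the restart point, and then the step size is halved while the epoch length is doubled. The toy objective, step sizes, and epoch schedule here are illustrative assumptions, not the paper's exact parameter choices.

```python
import random

def epoch_gd(grad_oracle, x0, eta1=1.0, T1=8, epochs=10):
    """Epoch-based SGD sketch: run SGD with a fixed step size within each
    epoch, restart the next epoch from the epoch's average iterate, then
    halve the step size and double the epoch length."""
    x = x0
    eta, T = eta1, T1
    for _ in range(epochs):
        y = x
        total = 0.0
        for _ in range(T):
            y = y - eta * grad_oracle(y)  # stochastic gradient step
            total += y
        x = total / T                     # average iterate of this epoch
        eta, T = eta / 2.0, T * 2         # shrink step size, lengthen epoch
    return x

random.seed(0)
# Noisy gradient of f(x) = (x - 1)^2 / 2, a 1-strongly-convex toy objective.
noisy_grad = lambda x: (x - 1.0) + random.gauss(0.0, 0.1)
x_hat = epoch_gd(noisy_grad, x0=5.0)
```

With this schedule the total number of gradient calls across epochs is $T_1(2^K - 1)$ for $K$ epochs, and the averaging within each epoch damps the gradient noise, which is what allows the $O(\frac1T)$ rate without the $\log(T)$ factor incurred by a standard online-to-batch conversion.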
Cite
Text
Hazan and Kale. "Beyond the Regret Minimization Barrier: An Optimal Algorithm for Stochastic Strongly-Convex Optimization." Proceedings of the 24th Annual Conference on Learning Theory, 2011.
Markdown
[Hazan and Kale. "Beyond the Regret Minimization Barrier: An Optimal Algorithm for Stochastic Strongly-Convex Optimization." Proceedings of the 24th Annual Conference on Learning Theory, 2011.](https://mlanthology.org/colt/2011/hazan2011colt-beyond/)
BibTeX
@inproceedings{hazan2011colt-beyond,
title = {{Beyond the Regret Minimization Barrier: An Optimal Algorithm for Stochastic Strongly-Convex Optimization}},
author = {Hazan, Elad and Kale, Satyen},
booktitle = {Proceedings of the 24th Annual Conference on Learning Theory},
year = {2011},
pages = {421--436},
volume = {19},
url = {https://mlanthology.org/colt/2011/hazan2011colt-beyond/}
}