Adaptive Online Gradient Descent
Abstract
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
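To make the interpolation described above concrete, here is a minimal sketch (not the authors' exact algorithm) of online gradient descent whose step size adapts to the curvature observed so far: when no curvature is seen it falls back to the O(1/√t) schedule of Zinkevich, giving O(√T) regret, and when strong convexity accumulates it shrinks as the inverse of the accumulated curvature, giving the O(log T) regime. The callback `grad_and_curvature` and the parameters `radius` and `G` are assumptions introduced for illustration.

```python
import numpy as np

def adaptive_ogd(grad_and_curvature, x0, T, radius=1.0, G=1.0):
    """Curvature-adaptive online gradient descent (illustrative sketch).

    grad_and_curvature(t, x) -> (g_t, H_t) is a hypothetical callback
    returning the gradient of the t-th loss at x and a lower bound
    H_t >= 0 on its curvature (strong-convexity parameter).
    """
    x = np.array(x0, dtype=float)
    curvature_sum = 0.0
    for t in range(1, T + 1):
        g, H = grad_and_curvature(t, x)
        curvature_sum += H
        if curvature_sum > 0:
            eta = 1.0 / curvature_sum          # strongly convex regime: ~ log T regret
        else:
            eta = radius / (G * np.sqrt(t))    # linear/general convex regime: ~ sqrt(T) regret
        x = x - eta * np.asarray(g, dtype=float)
        # Project back onto the Euclidean ball of the given radius.
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return x
```

As a usage note, the caller would supply the per-round gradients and curvature estimates; intermediate rates between √T and log T arise when only some rounds contribute curvature.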
Cite
Text
Bartlett et al. "Adaptive Online Gradient Descent." Neural Information Processing Systems, 2007.
Markdown
[Bartlett et al. "Adaptive Online Gradient Descent." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/bartlett2007neurips-adaptive/)
BibTeX
@inproceedings{bartlett2007neurips-adaptive,
title = {{Adaptive Online Gradient Descent}},
author = {Bartlett, Peter L. and Hazan, Elad and Rakhlin, Alexander},
booktitle = {Neural Information Processing Systems},
year = {2007},
pages = {65-72},
url = {https://mlanthology.org/neurips/2007/bartlett2007neurips-adaptive/}
}