Parameter-Free, Dynamic, and Strongly-Adaptive Online Learning
Abstract
We provide a new online learning algorithm that for the first time combines several disparate notions of adaptivity. First, our algorithm obtains a “parameter-free” regret bound that adapts to the norm of the comparator and to the squared norms of the gradients it observes. Second, it obtains a “strongly-adaptive” regret bound, so that for any given interval of length $N$, the regret over the interval is $\tilde O(\sqrt{N})$. Finally, our algorithm obtains an optimal “dynamic” regret bound: for any sequence of comparators with path-length $P$, our algorithm obtains regret $\tilde O(\sqrt{PN})$ over intervals of length $N$. Our primary technique for achieving these goals is a new method of combining the regret bounds of constrained online learners that does not rely on an expert meta-algorithm to aggregate them.
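For concreteness, the three guarantees can be written schematically as below. This is a sketch in standard online-learning notation (iterates $w_t$, gradients $g_t$, comparators $u$ and $u_t$); the notation is an assumption on our part rather than a quotation of the paper's exact theorem statements.

```latex
% Schematic regret guarantees over an interval [a, b] of length N.
% Sketch only: w_t are the algorithm's iterates, g_t the observed
% gradients, u a fixed comparator, and u_a, ..., u_b a comparator
% sequence with path length P = \sum_t \|u_{t+1} - u_t\|; the paper's
% exact statements may differ in constants and logarithmic factors.

% Parameter-free: adapts to the comparator norm and the squared
% norms of the observed gradients.
\[ \sum_{t=a}^{b} \langle g_t, w_t - u \rangle
   \le \tilde O\!\Big( \|u\| \sqrt{\sum_{t=a}^{b} \|g_t\|^2} \Big) \]

% Strongly adaptive: \tilde O(\sqrt{N}) regret on every interval.
\[ \sum_{t=a}^{b} \langle g_t, w_t - u \rangle \le \tilde O(\sqrt{N}) \]

% Dynamic: regret against any comparator sequence of path length P.
\[ \sum_{t=a}^{b} \langle g_t, w_t - u_t \rangle \le \tilde O(\sqrt{PN}) \]
```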
Cite
Text
Cutkosky. "Parameter-Free, Dynamic, and Strongly-Adaptive Online Learning." International Conference on Machine Learning, 2020.Markdown
[Cutkosky. "Parameter-Free, Dynamic, and Strongly-Adaptive Online Learning." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/cutkosky2020icml-parameterfree/)BibTeX
@inproceedings{cutkosky2020icml-parameterfree,
  title = {{Parameter-Free, Dynamic, and Strongly-Adaptive Online Learning}},
  author = {Cutkosky, Ashok},
  booktitle = {International Conference on Machine Learning},
  year = {2020},
  pages = {2250--2259},
  volume = {119},
  url = {https://mlanthology.org/icml/2020/cutkosky2020icml-parameterfree/}
}