Parameter-Free Mirror Descent
Abstract
We develop a modified online mirror descent framework that is suitable for building adaptive and parameter-free algorithms in unbounded domains. We leverage this technique to develop the first unconstrained online linear optimization algorithm achieving an optimal dynamic regret bound, and we further demonstrate that natural strategies based on Follow-the-Regularized-Leader are unable to achieve similar results. We also apply our mirror descent framework to build new parameter-free implicit updates, as well as a simplified and improved unconstrained scale-free algorithm.
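For readers unfamiliar with the setting, here is a minimal sketch of online mirror descent on an unbounded domain, using the Euclidean mirror map (so the update reduces to online gradient descent with a fixed step size). This is only an illustration of the generic protocol the paper builds on; the fixed step size `eta` and the toy linear losses are assumptions, not the parameter-free scheme developed in the paper.

```python
import numpy as np

def mirror_descent(grads, eta=0.1):
    """Online mirror descent with the Euclidean mirror map.

    Each round: play w_t, observe gradient g_t of the linear loss
    <g_t, w>, then update w_{t+1} = w_t - eta * g_t. With the
    Euclidean regularizer this is plain online gradient descent.
    """
    w = np.zeros_like(grads[0], dtype=float)
    iterates = []
    for g in grads:
        iterates.append(w.copy())  # w_t is played before seeing g_t
        w = w - eta * g            # mirror step (Euclidean case)
    return iterates

# Toy example: constant gradient over 5 rounds
grads = [np.array([1.0, -1.0]) for _ in range(5)]
ws = mirror_descent(grads, eta=0.1)
```

The iterates drift in the direction opposite the cumulative gradient; a parameter-free method would instead adapt its effective step size to the (unknown) comparator norm rather than fixing `eta` in advance.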
Cite

Jacobsen and Cutkosky. "Parameter-Free Mirror Descent." Conference on Learning Theory, 2022.
https://mlanthology.org/colt/2022/jacobsen2022colt-parameterfree/

BibTeX
@inproceedings{jacobsen2022colt-parameterfree,
title = {{Parameter-Free Mirror Descent}},
author = {Jacobsen, Andrew and Cutkosky, Ashok},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {4160--4211},
volume = {178},
url = {https://mlanthology.org/colt/2022/jacobsen2022colt-parameterfree/}
}