Better Full-Matrix Regret via Parameter-Free Online Learning
Abstract
We provide online convex optimization algorithms that guarantee improved full-matrix regret bounds. These algorithms extend prior work in several ways. First, we seamlessly allow for the incorporation of constraints without requiring unknown oracle-tuning for any learning rate parameters. Second, we improve the regret of the full-matrix AdaGrad algorithm by suggesting a better learning rate value and showing how to tune the learning rate to this value on-the-fly. Third, all our bounds are obtained via a general framework for constructing regret bounds that depend on an arbitrary sequence of norms.
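For context on the abstract's second contribution, the sketch below shows the standard full-matrix AdaGrad update of Duchi et al. (2011) that the paper improves upon; it is background rather than the paper's own method, and the names `grad_fn`, `eta`, and `delta` are illustrative assumptions. With an oracle-tuned learning rate `eta` this update guarantees regret on the order of the comparator norm times tr(G_T^{1/2}); the paper's point is to reach an improved bound of this type while tuning the learning rate on-the-fly, without oracle knowledge.

```python
import numpy as np

def full_matrix_adagrad(grad_fn, x0, num_steps, eta=1.0, delta=1e-8):
    """Minimal full-matrix AdaGrad sketch (Duchi et al., 2011); background only.

    grad_fn(x, t) should return the (sub)gradient of the round-t loss at x.
    eta is the learning rate whose oracle-tuning this paper removes.
    """
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    G = delta * np.eye(d)          # regularized sum of gradient outer products
    for t in range(num_steps):
        g = grad_fn(x, t)
        G += np.outer(g, g)        # G_t = delta*I + sum_{s<=t} g_s g_s^T
        # Preconditioned step x <- x - eta * G_t^{-1/2} g_t,
        # computed via an eigendecomposition of the PSD matrix G_t.
        evals, evecs = np.linalg.eigh(G)
        x -= eta * (evecs @ ((evecs.T @ g) / np.sqrt(evals)))
    return x

# Illustrative use on the fixed quadratic losses f_t(x) = ||x - 1||^2 / 2:
x_final = full_matrix_adagrad(lambda x, t: x - 1.0, np.zeros(3), num_steps=200)
```

The per-step eigendecomposition costs O(d^3), which is what makes the full-matrix variant expensive relative to diagonal AdaGrad; the regret payoff is the trace-of-square-root dependence on the gradient covariance rather than a coordinate-wise bound.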
Cite

Text:

Cutkosky. "Better Full-Matrix Regret via Parameter-Free Online Learning." Neural Information Processing Systems, 2020.

Markdown:

[Cutkosky. "Better Full-Matrix Regret via Parameter-Free Online Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/cutkosky2020neurips-better/)

BibTeX:
@inproceedings{cutkosky2020neurips-better,
title = {{Better Full-Matrix Regret via Parameter-Free Online Learning}},
author = {Cutkosky, Ashok},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/cutkosky2020neurips-better/}
}