Open Problem: Parameter-Free and Scale-Free Online Algorithms
Abstract
Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta) \sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called *parameter-free* algorithms have $\widetilde O(\sqrt{R(u) T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist *scale-free* algorithms that have $O((\eta R(u) + 1/\eta) \sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and for any sequence of loss vectors $\ell_1, \dots, \ell_T$. Parameter-free analogues of scale-free algorithms have never been designed. Is it possible to design algorithms that are simultaneously *parameter-free* and *scale-free*?
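As a point of reference (a short calculation implied by the abstract, not stated in it), substituting the tuned learning rate $\eta = 1/\sqrt{R(u)}$ into the vanilla regret bound shows how the parameter-free rate is recovered for a single fixed competitor $u$:

```latex
% Plugging the oracle tuning \eta = 1/\sqrt{R(u)} into the vanilla bound
% (\eta R(u) + 1/\eta)\sqrt{T}:
\begin{align*}
  \left(\eta R(u) + \frac{1}{\eta}\right)\sqrt{T}\Big|_{\eta = 1/\sqrt{R(u)}}
  &= \left(\sqrt{R(u)} + \sqrt{R(u)}\right)\sqrt{T} \\
  &= 2\,\sqrt{R(u)\,T},
\end{align*}
% which matches (up to constants and logarithmic factors) the
% \widetilde O(\sqrt{R(u) T}) regret that parameter-free algorithms
% achieve simultaneously for every competitor u, without knowing R(u).
```

The point of the open problem is that this tuning requires knowing $R(u)$ (and, for scale-free guarantees, the scale of the losses) in advance, whereas a parameter-free and scale-free algorithm would need neither.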
Cite
Text
Orabona and Pál. "Open Problem: Parameter-Free and Scale-Free Online Algorithms." Annual Conference on Computational Learning Theory, 2016.
Markdown
[Orabona and Pál. "Open Problem: Parameter-Free and Scale-Free Online Algorithms." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/orabona2016colt-open/)
BibTeX
@inproceedings{orabona2016colt-open,
title = {{Open Problem: Parameter-Free and Scale-Free Online Algorithms}},
author = {Orabona, Francesco and Pál, Dávid},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2016},
pages = {1659-1664},
url = {https://mlanthology.org/colt/2016/orabona2016colt-open/}
}