Faster Projection-Free Online Learning
Abstract
In many online learning problems the computational bottleneck for gradient-based methods is the projection operation. For this reason, in many problems the most efficient algorithms are based on the Frank-Wolfe method, which replaces projections with linear optimization. In the general case, however, online projection-free methods require more iterations than projection-based methods: the best known regret bound scales as $T^{3/4}$. Despite significant work on variants of the Frank-Wolfe method, this bound has remained unchanged for a decade. In this paper we give an efficient projection-free algorithm that guarantees $T^{2/3}$ regret for general online convex optimization with smooth cost functions and one linear optimization computation per iteration. As opposed to previous Frank-Wolfe approaches, our algorithm is derived using the Follow-the-Perturbed-Leader method and is analyzed using an online primal-dual framework.
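To make the projection-free idea concrete, here is a minimal sketch of the Follow-the-Perturbed-Leader scheme the paper builds on, specialized to online linear optimization over the Euclidean unit ball. The oracle, step sizes, and perturbation distribution below are illustrative assumptions, not the paper's actual algorithm: each round makes exactly one call to a linear optimization oracle and never computes a projection.

```python
import numpy as np

def linear_opt_oracle(c):
    # Hypothetical linear optimization oracle over the Euclidean unit ball:
    # returns argmin_{||x|| <= 1} <c, x>. In applications this cheap call
    # is what replaces the expensive projection step.
    nrm = np.linalg.norm(c)
    return -c / nrm if nrm > 0 else np.zeros_like(c)

def ftpl(loss_vectors, eta, seed=0):
    # Follow-the-Perturbed-Leader for online linear losses: play the
    # oracle's answer on the perturbed cumulative loss. One oracle call
    # per round; the perturbation scale 1/eta is an illustrative choice.
    rng = np.random.default_rng(seed)
    d = loss_vectors[0].shape[0]
    noise = rng.exponential(scale=1.0 / eta, size=d)  # drawn once, reused
    cum = np.zeros(d)
    plays = []
    for g in loss_vectors:
        plays.append(linear_opt_oracle(cum - noise))
        cum += g
    return plays
```

Every iterate returned lies in the feasible set by construction, since the oracle only ever outputs points of the unit ball; this is the structural property that lets Frank-Wolfe-style and FTPL-style methods avoid projections entirely.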
Cite
Text

Hazan and Minasyan. "Faster Projection-Free Online Learning." Conference on Learning Theory, 2020.

Markdown

[Hazan and Minasyan. "Faster Projection-Free Online Learning." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/hazan2020colt-faster/)

BibTeX
@inproceedings{hazan2020colt-faster,
  title     = {{Faster Projection-Free Online Learning}},
  author    = {Hazan, Elad and Minasyan, Edgar},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {1877-1893},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/hazan2020colt-faster/}
}