Affine-Invariant Online Optimization and the Low-Rank Experts Problem
Abstract
We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its \emph{intrinsic} dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order $\sqrt{rT}$ for the low-rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by Hazan et al. (2016).
Cite
Text
Koren and Livni. "Affine-Invariant Online Optimization and the Low-Rank Experts Problem." Neural Information Processing Systems, 2017.
Markdown
[Koren and Livni. "Affine-Invariant Online Optimization and the Low-Rank Experts Problem." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/koren2017neurips-affineinvariant/)
BibTeX
@inproceedings{koren2017neurips-affineinvariant,
title = {{Affine-Invariant Online Optimization and the Low-Rank Experts Problem}},
author = {Koren, Tomer and Livni, Roi},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {4747--4755},
url = {https://mlanthology.org/neurips/2017/koren2017neurips-affineinvariant/}
}