Consistent Online Optimization: Convex and Submodular
Abstract
Modern online learning algorithms achieve low (sublinear) regret in a variety of settings. These algorithms, however, update their solution at every time step. While each individual update is computationally efficient, the sheer frequency of updates makes the algorithms untenable in some practical applications. In this work we develop online learning algorithms that update their solution only a sublinear number of times. We give a meta-algorithm based on non-homogeneous Poisson processes that offers a smooth trade-off between regret and frequency of updates. Empirically, we show that in many cases we can significantly reduce the number of updates at a minimal increase in regret.
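The mechanism the abstract describes, roughly, is to decouple the learner's internal state from the solution it exposes: the state is updated every round, but the played solution is refreshed only when a point of the non-homogeneous Poisson process fires, so the expected number of solution changes is the integral of the intensity and can be made sublinear. Below is a minimal illustrative sketch of that idea in Python, using online gradient descent as the base learner and Bernoulli thinning to simulate the process in discrete time; the function names, step-size schedule, and intensity schedule are illustrative assumptions, not the paper's exact meta-algorithm.

import numpy as np

def lazily_updated_ogd(grad_fn, dim, T, eta=1.0, intensity=lambda t: t ** -0.5, seed=0):
    # Illustrative sketch (not the paper's exact algorithm): run online
    # gradient descent on an internal iterate every round, but commit the
    # played solution only when the (thinned) Poisson process fires.
    rng = np.random.default_rng(seed)
    x_internal = np.zeros(dim)    # updated every round
    x_played = x_internal.copy()  # exposed to the environment; changes rarely
    n_commits = 0
    for t in range(1, T + 1):
        g = grad_fn(x_internal, t)                 # full-information gradient
        x_internal = x_internal - eta / np.sqrt(t) * g
        if rng.random() < min(1.0, intensity(t)):  # discrete-time thinning of the process
            x_played = x_internal.copy()           # commit a solution change
            n_commits += 1
    return x_played, n_commits

# Example: a fixed quadratic loss ||x - target||^2 (target is a made-up instance).
target = np.ones(5)
x, k = lazily_updated_ogd(lambda x, t: 2 * (x - target), dim=5, T=10_000)
print(x.round(2), k)  # x approaches target; ~200 commits expected over 10,000 rounds

With intensity(t) = 1/sqrt(t), the expected number of commits over T rounds is on the order of sqrt(T), so the played solution changes only a sublinear number of times; raising the intensity trades more updates for lower regret, and vice versa.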
Cite
Text
Jaghargh et al. "Consistent Online Optimization: Convex and Submodular." Artificial Intelligence and Statistics, 2019.
Markdown
[Jaghargh et al. "Consistent Online Optimization: Convex and Submodular." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/jaghargh2019aistats-consistent/)
BibTeX
@inproceedings{jaghargh2019aistats-consistent,
title = {{Consistent Online Optimization: Convex and Submodular}},
author = {Jaghargh, Mohammad Reza Karimi and Krause, Andreas and Lattanzi, Silvio and Vassilvitskii, Sergei},
booktitle = {Artificial Intelligence and Statistics},
year = {2019},
pages = {2241-2250},
volume = {89},
url = {https://mlanthology.org/aistats/2019/jaghargh2019aistats-consistent/}
}