Online Optimization : Competing with Dynamic Comparators

Abstract

Recent literature on online learning has focused on developing adaptive algorithms that take advantage of regularity in the sequence of observations while retaining worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks, with a regret guarantee that scales with the regularity of both the sequence of cost functions and the comparator sequence. Notably, the regret bound adapts to the smaller complexity measure present in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games, where both players achieve no-regret guarantees against the best sequences of actions in hindsight.
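For context, the dynamic-comparator benchmark referenced in the abstract is standard in this literature. In typical notation (illustrative here, not necessarily the paper's exact formulation), a learner playing $x_t$ against convex costs $f_t$ is compared to an arbitrary comparator sequence $u_1, \dots, u_T$, whose regularity is captured by its path length:

\[
\mathrm{Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad
C_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert .
\]

A bound that scales with $C_T$ (or with an analogous variation measure of the $f_t$) recovers the usual static-regret guarantee when the environment is stationary and degrades gracefully as it drifts. The minimal sketch below only illustrates these quantities, not the paper's adaptive algorithm: plain online gradient descent on drifting one-dimensional quadratic losses, with dynamic regret measured against the per-round minimizers and the comparator path length reported alongside it; all names and parameters are hypothetical.

```python
# Illustration only (not the paper's method): online gradient descent on
# drifting quadratic losses f_t(x) = 0.5 * (x - theta_t)^2. The dynamic-regret
# comparator is the per-round minimizer u_t = theta_t, and
# C_T = sum_t |u_t - u_{t-1}| measures the comparator's regularity.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
theta = np.cumsum(0.01 * rng.standard_normal(T))  # slowly drifting minimizers

x, eta = 0.0, 0.1        # learner's iterate and a fixed step size
dyn_regret = 0.0
for t in range(T):
    dyn_regret += 0.5 * (x - theta[t]) ** 2   # f_t(x_t) - f_t(u_t), since f_t(u_t) = 0
    x -= eta * (x - theta[t])                 # gradient step on f_t

path_length = np.abs(np.diff(theta)).sum()    # comparator regularity C_T
print(f"dynamic regret = {dyn_regret:.3f}, path length C_T = {path_length:.3f}")
```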

Cite

Text

Jadbabaie et al. "Online Optimization : Competing with Dynamic Comparators." International Conference on Artificial Intelligence and Statistics, 2015.

Markdown

[Jadbabaie et al. "Online Optimization : Competing with Dynamic Comparators." International Conference on Artificial Intelligence and Statistics, 2015.](https://mlanthology.org/aistats/2015/jadbabaie2015aistats-online/)

BibTeX

@inproceedings{jadbabaie2015aistats-online,
  title     = {{Online Optimization : Competing with Dynamic Comparators}},
  author    = {Jadbabaie, Ali and Rakhlin, Alexander and Shahrampour, Shahin and Sridharan, Karthik},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2015},
  url       = {https://mlanthology.org/aistats/2015/jadbabaie2015aistats-online/}
}