The Best of Both Worlds: Stochastic and Adversarial Bandits

Abstract

We present a new bandit algorithm, SAO (Stochastic and Adversarial Optimal), whose regret is (essentially) optimal both for adversarial rewards and for stochastic rewards. Specifically, SAO combines the O(√n) worst-case regret of Exp3 (Auer et al., 2002b) and the (poly)logarithmic regret of UCB1 (Auer et al., 2002a) for stochastic rewards. Adversarial rewards and stochastic rewards are the two main settings in the literature on multi-armed bandits (MAB). Prior work on MAB treats them separately, and does not attempt to jointly optimize for both. This result falls under the general agenda of designing algorithms that combine optimal worst-case performance with improved guarantees for “nice” problem instances.
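The two building blocks named in the abstract, UCB1 (for stochastic rewards) and Exp3 (for adversarial rewards), can each be summarized in a few lines. The Python sketch below is illustrative only: the Bernoulli arm means, horizon, and exploration parameter are hypothetical choices, and SAO itself is not reproduced here.

```python
# Illustrative sketch of the two baseline algorithms named in the abstract:
# UCB1 (stochastic rewards) and Exp3 (adversarial rewards). Arm means,
# horizon, and parameter values below are hypothetical; SAO is not shown.
import math
import random

def ucb1(means, n):
    """UCB1: pull the arm maximizing empirical mean + sqrt(2 ln t / pulls)."""
    k = len(means)
    counts, sums, reward = [0] * k, [0.0] * k, 0.0
    for t in range(1, n + 1):
        if t <= k:                      # play each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = float(random.random() < means[arm])   # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        reward += r
    return reward

def exp3(means, n, gamma=0.1):
    """Exp3: exponential weights over arms with uniform exploration gamma."""
    k = len(means)
    weights, reward = [1.0] * k, 0.0
    for _ in range(n):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / k for w in weights]
        arm = random.choices(range(k), weights=probs)[0]
        r = float(random.random() < means[arm])   # Bernoulli reward
        est = r / probs[arm]                      # importance-weighted estimate
        weights[arm] *= math.exp(gamma * est / k)
        m = max(weights)                          # rescale to avoid overflow;
        weights = [w / m for w in weights]        # probabilities are unchanged
        reward += r
    return reward

if __name__ == "__main__":
    arms = [0.3, 0.5, 0.7]                        # hypothetical arm means
    print("UCB1 total reward:", ucb1(arms, 10_000))
    print("Exp3 total reward:", exp3(arms, 10_000))
```

On a stochastic instance like the one above, UCB1's index rule concentrates on the best arm and attains logarithmic regret, while Exp3's importance-weighted updates guard against adversarially chosen rewards at the cost of a √n worst-case rate; the paper's contribution is a single algorithm achieving (essentially) both guarantees.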

Cite

Text

Bubeck and Slivkins. "The Best of Both Worlds: Stochastic and Adversarial Bandits." Proceedings of the 25th Annual Conference on Learning Theory, 2012.

Markdown

[Bubeck and Slivkins. "The Best of Both Worlds: Stochastic and Adversarial Bandits." Proceedings of the 25th Annual Conference on Learning Theory, 2012.](https://mlanthology.org/colt/2012/bubeck2012colt-best/)

BibTeX

@inproceedings{bubeck2012colt-best,
  title     = {{The Best of Both Worlds: Stochastic and Adversarial Bandits}},
  author    = {Bubeck, Sébastien and Slivkins, Aleksandrs},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  year      = {2012},
  pages     = {42.1--42.23},
  volume    = {23},
  url       = {https://mlanthology.org/colt/2012/bubeck2012colt-best/}
}