Refined Lower Bounds for Adversarial Bandits

Abstract

We provide new lower bounds on the regret that must be suffered by adversarial bandit algorithms. The new results show that recent upper bounds that either (a) hold with high probability, or (b) depend on the total loss of the best arm, or (c) depend on the quadratic variation of the losses, are close to tight. In addition, we prove two impossibility results. First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case. Second, the regret cannot scale with the effective range of the losses. In contrast, both improvements are possible in the full-information setting.
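To make the notion of regret concrete: the lower bounds above concern the gap between a learner's cumulative loss and that of the single best arm in hindsight. The following is a minimal illustrative sketch (not a construction from the paper) that runs a standard Exp3-style learner on a fixed adversarial loss sequence and measures this regret; the function names and parameters (`eta`, `seed`) are assumptions for the example.

```python
import math
import random

def exp3_total_loss(losses, eta, seed=0):
    """Run an Exp3-style learner on a T x K loss matrix with entries in [0, 1].

    Only the pulled arm's loss is observed (bandit feedback), so the learner
    updates with an importance-weighted loss estimate.
    Returns the learner's total incurred loss.
    """
    rng = random.Random(seed)
    K = len(losses[0])
    weights = [1.0] * K
    total = 0.0
    for round_losses in losses:
        s = sum(weights)
        probs = [w / s for w in weights]
        arm = rng.choices(range(K), weights=probs)[0]
        loss = round_losses[arm]
        total += loss
        # Unbiased estimate of the pulled arm's loss; other arms get 0.
        estimate = loss / probs[arm]
        weights[arm] *= math.exp(-eta * estimate)
    return total

def regret(losses, eta, seed=0):
    """Regret = learner's total loss minus the best fixed arm's total loss."""
    K = len(losses[0])
    best_arm_loss = min(sum(row[i] for row in losses) for i in range(K))
    return exp3_total_loss(losses, eta, seed) - best_arm_loss
```

For instance, on a sequence where one arm always has loss 0 and another always has loss 1, the regret equals the number of pulls of the bad arm, which Exp3 keeps sublinear in the horizon; the paper's lower bounds show such guarantees cannot be improved beyond logarithmic factors.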

Cite

Text

Gerchinovitz and Lattimore. "Refined Lower Bounds for Adversarial Bandits." Neural Information Processing Systems, 2016.

Markdown

[Gerchinovitz and Lattimore. "Refined Lower Bounds for Adversarial Bandits." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/gerchinovitz2016neurips-refined/)

BibTeX

@inproceedings{gerchinovitz2016neurips-refined,
  title     = {{Refined Lower Bounds for Adversarial Bandits}},
  author    = {Gerchinovitz, Sébastien and Lattimore, Tor},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {1198--1206},
  url       = {https://mlanthology.org/neurips/2016/gerchinovitz2016neurips-refined/}
}