A Blackbox Approach to Best of Both Worlds in Bandits and Beyond

Abstract

Best-of-both-worlds algorithms for online learning, which achieve near-optimal regret in both the adversarial and the stochastic regimes, have received growing attention recently. Existing techniques often require careful adaptation to every new problem setup, including specialized potentials and delicate tuning of algorithm parameters. Yet, in domains such as linear bandits, it is still unknown if there exists an algorithm that can obtain $O(\log(T))$ regret in the stochastic regime and $\tilde{O}(\sqrt{T})$ regret in the adversarial regime. In this work, we resolve this question positively and present a generally applicable reduction from best-of-both-worlds to a wide family of follow-the-regularized-leader (FTRL) algorithms. We showcase the capability of this reduction by transforming existing algorithms that only achieve worst-case guarantees into new best-of-both-worlds algorithms in the settings of contextual bandits, graph bandits, and tabular Markov decision processes.
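For readers unfamiliar with the FTRL family the reduction targets, the following is a minimal illustrative sketch of FTRL with the negative-entropy regularizer (i.e., exponential weights) over a finite action set. It is not the paper's algorithm or reduction; the function name, learning rate, and full-information loss feedback are assumptions for illustration only.

```python
import numpy as np

def ftrl_exp_weights(losses, eta=0.1):
    """Illustrative FTRL with negative-entropy regularizer (exponential weights).

    `losses` is a (T, K) array of full-information loss vectors over K actions.
    Each round plays the distribution minimizing
        eta * <p, cumulative loss> + negative entropy of p,
    whose closed form is p_t proportional to exp(-eta * cumulative loss).
    Returns the (T, K) sequence of distributions played.
    """
    T, K = losses.shape
    cum = np.zeros(K)          # cumulative loss of each action so far
    plays = []
    for t in range(T):
        logits = -eta * cum
        logits -= logits.max() # shift for numerical stability
        p = np.exp(logits)
        p /= p.sum()
        plays.append(p)
        cum += losses[t]       # observe this round's losses
    return np.array(plays)
```

With a stationary loss gap between two actions, the played distribution concentrates on the better action as rounds accumulate, which is the kind of stochastic-regime adaptivity the best-of-both-worlds guarantee formalizes.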

Cite

Text

Dann et al. "A Blackbox Approach to Best of Both Worlds in Bandits and Beyond." Conference on Learning Theory, 2023.

Markdown

[Dann et al. "A Blackbox Approach to Best of Both Worlds in Bandits and Beyond." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/dann2023colt-blackbox/)

BibTeX

@inproceedings{dann2023colt-blackbox,
  title     = {{A Blackbox Approach to Best of Both Worlds in Bandits and Beyond}},
  author    = {Dann, Chris and Wei, Chen-Yu and Zimmert, Julian},
  booktitle = {Conference on Learning Theory},
  year      = {2023},
  pages     = {5503--5570},
  volume    = {195},
  url       = {https://mlanthology.org/colt/2023/dann2023colt-blackbox/}
}