Fast Asymptotically Optimal Algorithms for Non-Parametric Stochastic Bandits

Abstract

We consider the problem of regret minimization in non-parametric stochastic bandits. When the rewards are known to be bounded from above, there exist asymptotically optimal algorithms whose asymptotic regret depends on an infimum of Kullback-Leibler (KL) divergences. These algorithms are computationally expensive and require storing all past rewards, so simpler but non-optimal algorithms are often used instead. We introduce several methods to approximate the KL infimum that drastically reduce the computational and memory costs of existing optimal algorithms, while preserving their regret guarantees. We apply our findings to design new variants of the MED and IMED algorithms, and demonstrate their practical benefits with extensive numerical simulations.
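The KL infimum the abstract refers to is the quantity K_inf(F, μ): the smallest KL divergence between the empirical reward distribution F and any distribution (supported below the known upper bound B) with mean at least μ. As a point of reference for why computing it exactly is expensive, here is a minimal sketch of the standard approach via the concave dual formulation of Honda and Takemura, max over λ in [0, 1/(B−μ)] of E[log(1 − λ(X − μ))], evaluated on all stored rewards. The function name and the ternary-search solver are our own illustrative choices, not the paper's faster approximations; note that it needs every past reward, which is exactly the cost the paper's methods avoid.

```python
import numpy as np

def kinf_empirical(rewards, mu, B=1.0, iters=60):
    """Empirical K_inf for rewards bounded above by B, computed via the
    dual formulation  max_{0 <= lam <= 1/(B-mu)} mean(log(1 - lam*(x - mu))).
    The dual objective is concave in lam, so a ternary search suffices."""
    x = np.asarray(rewards, dtype=float)
    if x.mean() >= mu:
        return 0.0  # empirical mean already reaches mu: no divergence needed
    lo, hi = 0.0, 1.0 / (B - mu)

    def dual(lam):
        # clip the log argument away from 0 so rewards equal to B stay finite
        return np.mean(np.log(np.maximum(1.0 - lam * (x - mu), 1e-12)))

    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if dual(m1) < dual(m2):
            lo = m1
        else:
            hi = m2
    return dual(0.5 * (lo + hi))
```

For a point mass at 0.2 with μ = 0.5 and B = 1, the dual maximum is attained at λ = 2 and equals log(1.6) ≈ 0.47, which the sketch recovers numerically. Every evaluation of the dual touches all stored rewards, so the per-round cost grows with the sample size; this is the bottleneck that motivates the approximations studied in the paper.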

Cite

Text

Baudry et al. "Fast Asymptotically Optimal Algorithms for Non-Parametric Stochastic Bandits." Neural Information Processing Systems, 2023.

Markdown

[Baudry et al. "Fast Asymptotically Optimal Algorithms for Non-Parametric Stochastic Bandits." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/baudry2023neurips-fast/)

BibTeX

@inproceedings{baudry2023neurips-fast,
  title     = {{Fast Asymptotically Optimal Algorithms for Non-Parametric Stochastic Bandits}},
  author    = {Baudry, Dorian and Pesquerel, Fabien and Degenne, Rémy and Maillard, Odalric-Ambrym},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/baudry2023neurips-fast/}
}