Towards Optimal Algorithms for Multi-Player Bandits Without Collision Sensing Information

Abstract

We propose a novel algorithm for multi-player multi-armed bandits without collision sensing information. Our algorithm circumvents two problems shared by all state-of-the-art algorithms: it does not require as input a lower bound on the minimal expected reward of an arm, and its performance does not scale inversely proportionally to the minimal expected reward. We prove a theoretical regret upper bound to justify these claims. We complement our theoretical results with numerical experiments, showing that the proposed algorithm outperforms the state of the art in practice.
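To make the setting concrete, here is a minimal sketch of the multi-player bandit environment without collision sensing that the abstract refers to. The class name, arm means, and seed are illustrative choices, not from the paper; the key property it models is standard in this literature: a player colliding with another observes a reward of 0, indistinguishable from a genuine zero Bernoulli reward.

```python
import random
from collections import Counter

class NoSensingMPBandit:
    """Illustrative multi-player bandit environment without collision sensing.

    Each round, every player pulls one of K arms. A player alone on an arm
    observes a Bernoulli reward with that arm's mean; players that collide
    observe 0, with no separate collision indicator.
    """

    def __init__(self, means, seed=0):
        self.means = means          # expected reward of each arm (assumed values)
        self.rng = random.Random(seed)

    def play(self, choices):
        """choices[m] = arm pulled by player m; returns the observed rewards."""
        counts = Counter(choices)
        rewards = []
        for arm in choices:
            if counts[arm] > 1:
                # Collision: the player observes 0 but receives no signal
                # telling it apart from a zero-valued reward draw.
                rewards.append(0)
            else:
                rewards.append(1 if self.rng.random() < self.means[arm] else 0)
        return rewards

env = NoSensingMPBandit(means=[0.9, 0.5, 0.2])
print(env.play([0, 0, 2]))  # players 0 and 1 collide on arm 0
```

The difficulty the paper addresses follows from this observation model: when the minimal arm mean is small, zeros from collisions and zeros from low-reward arms are hard to tell apart, which is why earlier algorithms needed a lower bound on that mean as input.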

Cite

Text

Huang et al. "Towards Optimal Algorithms for Multi-Player Bandits Without Collision Sensing Information." Conference on Learning Theory, 2022.

Markdown

[Huang et al. "Towards Optimal Algorithms for Multi-Player Bandits Without Collision Sensing Information." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/huang2022colt-optimal/)

BibTeX

@inproceedings{huang2022colt-optimal,
  title     = {{Towards Optimal Algorithms for Multi-Player Bandits Without Collision Sensing Information}},
  author    = {Huang, Wei and Combes, Richard and Trinh, Cindy},
  booktitle = {Conference on Learning Theory},
  year      = {2022},
  pages     = {1990--2012},
  volume    = {178},
  url       = {https://mlanthology.org/colt/2022/huang2022colt-optimal/}
}