Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret with Neither Communication nor Collisions
Abstract
We consider the cooperative multi-player version of the stochastic multi-armed bandit problem. We study the regime where the players cannot communicate but have access to shared randomness. In prior work by the first two authors, a strategy for this regime was constructed for two players and three arms, with regret $\tilde{O}(\sqrt{T})$, and with no collisions at all between the players (with very high probability). In this paper we show that these properties (near-optimal regret and no collisions at all) are achievable for any number of players and arms. At a high level, the previous strategy relied heavily on a 2-dimensional geometric intuition that is difficult to generalize to higher dimensions, whereas here we take a more combinatorial route to build the new strategy.
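To make the setting concrete, here is a minimal sketch of the model the abstract describes: several players each pull an arm every round, players choosing the same arm collide (modeled here as receiving reward 0, a common convention in this literature), and shared randomness lets all players derive the same arm assignment locally, with no communication. The function names and the naive seed-based coordination are illustrative assumptions only, not the paper's construction.

```python
import random

def play_round(means, choices, rng):
    """One round of the assumed model: players that choose the same arm
    collide and get reward 0; otherwise each player draws a Bernoulli
    reward with its arm's mean."""
    rewards = []
    for arm in choices:
        if choices.count(arm) > 1:  # collision: two or more players on this arm
            rewards.append(0.0)
        else:
            rewards.append(float(rng.random() < means[arm]))
    return rewards

def shared_randomness_assignment(num_players, num_arms, shared_seed, t):
    """Every player runs this locally with the same shared seed, so all
    players compute the identical arm assignment without communicating."""
    rng = random.Random(shared_seed * 1_000_003 + t)
    # A sample without replacement gives distinct arms, hence no collisions.
    return rng.sample(range(num_arms), num_players)

# Toy run: 3 players, 5 arms.
means = [0.9, 0.8, 0.7, 0.2, 0.1]
env_rng = random.Random(0)
choices = shared_randomness_assignment(3, 5, shared_seed=42, t=0)
rewards = play_round(means, choices, env_rng)
assert len(set(choices)) == 3  # shared randomness => collision-free round
```

Of course, this naive assignment ignores the learning problem entirely (it never favors better arms); the paper's contribution is a strategy that is simultaneously collision-free and near-optimal in regret.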
Cite
Text
Bubeck et al. "Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret with Neither Communication nor Collisions." Conference on Learning Theory, 2021. https://mlanthology.org/colt/2021/bubeck2021colt-cooperative/

BibTeX
@inproceedings{bubeck2021colt-cooperative,
title = {{Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret with Neither Communication nor Collisions}},
author = {Bubeck, Sebastien and Budzinski, Thomas and Sellke, Mark},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {821-822},
volume = {134},
url = {https://mlanthology.org/colt/2021/bubeck2021colt-cooperative/}
}