Parallelizing Thompson Sampling

Abstract

How can we make use of information parallelism in online decision-making problems while efficiently balancing the exploration-exploitation trade-off? In this paper, we introduce a batch Thompson Sampling framework for two canonical online decision-making problems with partial feedback, namely, the stochastic multi-armed bandit and the linear contextual bandit. Over a time horizon $T$, our batch Thompson Sampling policy achieves the same (asymptotic) regret bound as a fully sequential one while carrying out only $O(\log T)$ batch queries. To achieve this exponential reduction, i.e., reducing the number of interactions from $T$ to $O(\log T)$, our batch policy dynamically determines the duration of each batch in order to balance the exploration-exploitation trade-off. We also demonstrate experimentally that dynamic batch allocation outperforms natural baselines.
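The abstract describes the batching idea only at a high level. Below is a minimal, self-contained Python sketch of a batched Thompson Sampling loop for Bernoulli bandits. It uses a geometrically doubling batch schedule as a stand-in assumption, which already yields $O(\log T)$ batch queries; the paper's actual policy instead chooses each batch's duration dynamically from the observed statistics. The Beta-Bernoulli setup and all names here are illustrative, not taken from the paper.

```python
import numpy as np

def batched_thompson_sampling(true_means, T, rng=None):
    """Batched Thompson Sampling for Bernoulli arms with Beta(1, 1) priors.

    Batch lengths double, so the number of batches is O(log T). This static
    doubling schedule is an illustrative assumption; the paper's policy
    determines batch durations dynamically.
    """
    rng = rng or np.random.default_rng(0)
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)  # Beta posterior parameters
    best = max(true_means)
    regret, t, batch_len = 0.0, 0, 1

    while t < T:
        # Freeze the posterior at the start of the batch: within a batch,
        # actions are chosen without seeing that batch's feedback.
        a_snap, b_snap = alpha.copy(), beta.copy()
        for _ in range(min(batch_len, T - t)):
            arm = int(np.argmax(rng.beta(a_snap, b_snap)))  # posterior sample
            reward = float(rng.random() < true_means[arm])  # Bernoulli pull
            alpha[arm] += reward        # updates accumulate here but are
            beta[arm] += 1.0 - reward   # only read at the next batch
            regret += best - true_means[arm]
            t += 1
        batch_len *= 2  # geometric schedule -> O(log T) batches
    return regret

print(batched_thompson_sampling([0.2, 0.5, 0.7], T=10_000))
```

Note that the per-arm counts are updated inside the loop but sampled only from the frozen snapshot, which is equivalent to buffering all feedback until the batch ends.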

Cite

Text

Karbasi et al. "Parallelizing Thompson Sampling." Neural Information Processing Systems, 2021.

Markdown

[Karbasi et al. "Parallelizing Thompson Sampling." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/karbasi2021neurips-parallelizing/)

BibTeX

@inproceedings{karbasi2021neurips-parallelizing,
  title     = {{Parallelizing Thompson Sampling}},
  author    = {Karbasi, Amin and Mirrokni, Vahab and Shadravan, Mohammad},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/karbasi2021neurips-parallelizing/}
}