MOTS: Minimax Optimal Thompson Sampling
Abstract
Thompson sampling is one of the most widely used algorithms for online decision problems, due to its ease of implementation and superior empirical performance over other state-of-the-art methods. Despite its popularity and empirical success, it has remained an open problem whether Thompson sampling can achieve the minimax optimal regret O(\sqrt{TK}) for K-armed bandit problems, where T is the total time horizon. In this paper, we close this long-standing gap by proposing a new Thompson sampling algorithm called MOTS that adaptively truncates the sampling result of the chosen arm at each time step. We prove that this simple variant of Thompson sampling achieves the minimax optimal regret bound O(\sqrt{TK}) for finite time horizon T, as well as the asymptotically optimal regret bound as T grows to infinity. This is the first time that minimax optimality for multi-armed bandit problems has been attained by a Thompson sampling type algorithm.
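The truncation idea described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's exact algorithm: the Gaussian posterior with variance 1/(rho * n), the clipping threshold mu + sqrt((alpha / n) * log+(T / (K n))), and the parameter values alpha and rho are assumptions chosen for illustration.

```python
import math
import numpy as np

def mots_sketch(means, T, alpha=4.0, rho=0.5, seed=0):
    """Illustrative truncated Thompson sampling on a Gaussian bandit.

    means : true mean reward of each arm (rewards are N(mean, 1))
    T     : total time horizon
    Returns (pulls per arm, cumulative pseudo-regret).
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    pulls = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    best = max(means)
    regret = 0.0

    # Initialize by pulling each arm once.
    for a in range(K):
        sums[a] += rng.normal(means[a], 1.0)
        pulls[a] += 1
        regret += best - means[a]

    for _ in range(K, T):
        theta = np.empty(K)
        for a in range(K):
            mu = sums[a] / pulls[a]
            # Posterior-style Gaussian sample (variance assumption: 1/(rho*n)).
            s = rng.normal(mu, math.sqrt(1.0 / (rho * pulls[a])))
            # Truncate the sample at a confidence-bound threshold,
            # using log+(x) = max(log x, 0).
            log_plus = max(math.log(T / (K * pulls[a])), 0.0)
            tau = mu + math.sqrt((alpha / pulls[a]) * log_plus)
            theta[a] = min(s, tau)
        a = int(np.argmax(theta))
        sums[a] += rng.normal(means[a], 1.0)
        pulls[a] += 1
        regret += best - means[a]
    return pulls, regret
```

As a usage example, running `mots_sketch([0.0, 0.5], 2000)` concentrates most pulls on the better arm; the clipping step caps overly optimistic posterior samples, which is the mechanism the abstract credits for minimax optimality.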
Cite
Text
Jin et al. "MOTS: Minimax Optimal Thompson Sampling." International Conference on Machine Learning, 2021.
Markdown
[Jin et al. "MOTS: Minimax Optimal Thompson Sampling." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/jin2021icml-mots/)
BibTeX
@inproceedings{jin2021icml-mots,
title = {{MOTS: Minimax Optimal Thompson Sampling}},
author = {Jin, Tianyuan and Xu, Pan and Shi, Jieming and Xiao, Xiaokui and Gu, Quanquan},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {5074-5083},
volume = {139},
url = {https://mlanthology.org/icml/2021/jin2021icml-mots/}
}