Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling
Abstract
Learning the minimum/maximum mean among a finite set of distributions is a fundamental sub-problem in planning, game tree search and reinforcement learning. We formalize this learning task as the problem of sequentially testing how the minimum mean among a finite set of distributions compares to a given threshold. We develop refined non-asymptotic lower bounds, which show that optimality mandates very different sampling behavior for a low vs high true minimum. We show that Thompson Sampling and the intuitive Lower Confidence Bounds policy each nail only one of these cases. We develop a novel approach that we call Murphy Sampling (MS). Even though it entertains exclusively low true minima, we prove that MS is optimal for both possibilities. We then design advanced self-normalized deviation inequalities, fueling more aggressive stopping rules. We complement our theoretical guarantees by experiments showing that MS works best in practice.
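To make the sampling rule concrete, the sketch below shows one way Murphy Sampling's conditioning can be realized for unit-variance Gaussian arms under a flat prior: draw a Thompson-style posterior sample, reject it unless its minimum falls below the threshold, then pull the arm achieving that minimum. This is a minimal illustration, not the paper's exact procedure; the function name `murphy_sample_arm`, the rejection cap, the fallback rule, and all simulation parameters are assumptions made here.

```python
import numpy as np

def murphy_sample_arm(means, counts, threshold, rng, max_tries=10_000):
    """One round of Murphy Sampling for unit-variance Gaussian arms.

    Draws a posterior sample conditioned (via rejection) on the event
    {min_a mu_a < threshold}, then returns the arm whose sampled mean is
    lowest. Assumes a flat prior, so arm a's posterior is
    N(empirical mean, 1 / counts[a]).
    """
    std = 1.0 / np.sqrt(counts)
    for _ in range(max_tries):
        theta = rng.normal(means, std)   # Thompson-style posterior draw
        if theta.min() < threshold:      # keep only "low minimum" worlds
            return int(np.argmin(theta))
    # Fallback when the conditioning event is very unlikely: act greedily
    # on the lowest empirical mean (a pragmatic choice, not from the paper).
    return int(np.argmin(means))

# Tiny simulation: 3 Gaussian arms, testing whether the lowest mean is below 0.
rng = np.random.default_rng(0)
true_means = np.array([0.5, 1.0, -0.2])
threshold = 0.0
K = len(true_means)
sums, counts = np.zeros(K), np.zeros(K)
for a in range(K):                       # pull each arm once to initialize
    sums[a] += rng.normal(true_means[a])
    counts[a] += 1
for t in range(2000):
    a = murphy_sample_arm(sums / counts, counts, threshold, rng)
    sums[a] += rng.normal(true_means[a])
    counts[a] += 1
print("pull counts:", counts.astype(int))  # the arm with mean -0.2 dominates
```

Rejection sampling is a natural way to draw from the conditional posterior here; when the data strongly contradict the low-minimum hypothesis the acceptance rate drops, which is why the sketch caps the number of tries.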
Cite
Text
Kaufmann et al. "Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling." Neural Information Processing Systems, 2018.

Markdown
[Kaufmann et al. "Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/kaufmann2018neurips-sequential/)

BibTeX
@inproceedings{kaufmann2018neurips-sequential,
title = {{Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling}},
author = {Kaufmann, Emilie and Koolen, Wouter M. and Garivier, Aurélien},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {6332--6342},
url = {https://mlanthology.org/neurips/2018/kaufmann2018neurips-sequential/}
}