An Optimal Algorithm for the Thresholding Bandit Problem
Abstract
We study a specific combinatorial pure exploration stochastic bandit problem where the learner aims at finding the set of arms whose means are above a given threshold, up to a given precision, and for a fixed time horizon. We propose a parameter-free algorithm based on an original heuristic, and prove that it is optimal for this problem by deriving matching upper and lower bounds. To the best of our knowledge, this is the first non-trivial pure exploration setting with fixed budget for which provably optimal strategies are constructed.
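The setting above (identify arms with mean above a threshold τ, up to precision ε, within a fixed budget) can be illustrated with a small simulation. The sketch below uses an APT-style anytime index, pulling the arm that minimises √T_i · (|μ̂_i − τ| + ε), i.e. concentrating samples on the most ambiguous arms; the function name `apt_run` and the Bernoulli arm model are illustrative choices, not taken from the paper.

```python
import math
import random

def apt_run(means, tau, eps, budget, seed=0):
    """Simulate a thresholding bandit with Bernoulli arms.

    Sampling rule (APT-style index): pull the arm minimising
    sqrt(T_i) * (|mu_hat_i - tau| + eps), so ambiguous arms
    (means close to the threshold) receive the most pulls.
    Returns the set of arm indices classified as above tau.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    # Initialisation: pull each arm once.
    for i in range(k):
        sums[i] += 1.0 if rng.random() < means[i] else 0.0
        counts[i] = 1
    # Spend the remaining budget according to the index.
    for _ in range(budget - k):
        idx = min(
            range(k),
            key=lambda i: math.sqrt(counts[i])
            * (abs(sums[i] / counts[i] - tau) + eps),
        )
        sums[idx] += 1.0 if rng.random() < means[idx] else 0.0
        counts[idx] += 1
    # Classify by empirical mean at the end of the budget.
    return {i for i in range(k) if sums[i] / counts[i] >= tau}

# Usage: four arms, threshold 0.5 — arms 2 and 3 lie above it.
print(apt_run([0.1, 0.35, 0.7, 0.9], tau=0.5, eps=0.05, budget=2000))
```

Note how the index trades off sample count against estimated distance to the threshold: an arm whose empirical mean sits near τ keeps a small index and is resampled, which is what makes the strategy parameter-free with respect to the unknown gaps.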
Cite
Text

Locatelli et al. "An Optimal Algorithm for the Thresholding Bandit Problem." International Conference on Machine Learning, 2016.

Markdown

[Locatelli et al. "An Optimal Algorithm for the Thresholding Bandit Problem." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/locatelli2016icml-optimal/)

BibTeX
@inproceedings{locatelli2016icml-optimal,
  title     = {{An Optimal Algorithm for the Thresholding Bandit Problem}},
  author    = {Locatelli, Andrea and Gutzeit, Maurilio and Carpentier, Alexandra},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {1690--1698},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/locatelli2016icml-optimal/}
}