Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem
Abstract
We consider the problem of \textit{best arm identification} with a \textit{fixed budget $T$}, in the $K$-armed stochastic bandit setting, with arm distributions supported on $[0,1]$. We prove that, for at least one bandit problem characterized by a complexity $H$, any bandit strategy will misidentify the best arm with probability lower bounded by $\exp\Big(-\frac{T}{\log(K)H}\Big)$, where $H$ is the sum, over all sub-optimal arms, of the inverses of the squared gaps. Our result formally disproves the general belief, coming from results in the fixed confidence setting, that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. It also proves that some existing strategies based on Successive Rejection of the arms are optimal, thereby closing the current gap between upper and lower bounds for the fixed budget best arm identification problem.
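As a quick numerical illustration, the sketch below instantiates the abstract's two rates on a concrete instance. Everything here is hypothetical and not from the paper: the arm means, the budget T, and the assumption of a unique best arm are chosen only to make the complexity $H$ and the $\log(K)$ gap in the exponent tangible.

import math

# Hypothetical instance (illustration only): arm means on [0,1],
# with arm 0 the unique best arm.
means = [0.9, 0.8, 0.7, 0.6, 0.5]
K = len(means)
T = 4000  # hypothetical fixed budget

best = max(means)
gaps = [best - m for m in means if m < best]  # gaps of the sub-optimal arms

# Complexity H: sum over the sub-optimal arms of the inverse squared gaps.
H = sum(1.0 / gap ** 2 for gap in gaps)

# Rate conjectured from the fixed confidence setting (disproved by the paper)
# versus the lower bound proved here in the fixed budget setting.
conjectured = math.exp(-T / H)
lower_bound = math.exp(-T / (math.log(K) * H))

print(f"H = {H:.2f}")
print(f"exp(-T/H)          = {conjectured:.3e}")
print(f"exp(-T/(log(K) H)) = {lower_bound:.3e}")

For this instance the two rates differ by a factor $\log(K)$ in the exponent, which is precisely the gap the paper shows no strategy can remove.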
Cite
Text
Carpentier and Locatelli. "Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem." Annual Conference on Computational Learning Theory, 2016.
Markdown
[Carpentier and Locatelli. "Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem." Annual Conference on Computational Learning Theory, 2016.](https://mlanthology.org/colt/2016/carpentier2016colt-tight/)
BibTeX
@inproceedings{carpentier2016colt-tight,
title = {{Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem}},
author = {Carpentier, Alexandra and Locatelli, Andrea},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2016},
pages = {590--604},
url = {https://mlanthology.org/colt/2016/carpentier2016colt-tight/}
}