Lower Bounds on the Sample Complexity of Exploration in the Multi-Armed Bandit Problem

Abstract

We consider the multi-armed bandit problem under the PAC (“probably approximately correct”) model. It was shown by Even-Dar et al. [5] that given n arms, it suffices to play the arms a total of $O\big(({n}/{\epsilon^2})\log ({1}/{\delta})\big)$ times to find an $\epsilon$-optimal arm with probability at least $1-\delta$. Our contribution is a matching lower bound that holds for any sampling policy. We also generalize the lower bound to a Bayesian setting, and to the case where the statistics of the arms are known but the identities of the arms are not.
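To make the sample-complexity bound concrete, here is a minimal sketch of the naive uniform-sampling strategy: pull every arm the same number of times and return the empirically best one. By Hoeffding's inequality and a union bound, this finds an ε-optimal arm with probability at least 1 − δ using $O\big(({n}/{\epsilon^2})\log ({n}/{\delta})\big)$ pulls, a log n factor worse than the median-elimination bound of Even-Dar et al. that the paper's lower bound matches. The `pull` callback and function name are illustrative assumptions, not from the paper.

```python
import math
import random

def naive_pac_best_arm(pull, n, eps, delta):
    """Return the index of an eps-optimal arm with probability >= 1 - delta.

    pull(i) must return one stochastic reward in [0, 1] for arm i
    (a hypothetical interface assumed for this sketch).

    Pulling each arm m = ceil((2/eps^2) * ln(2n/delta)) times makes every
    empirical mean accurate to within eps/2 simultaneously (Hoeffding +
    union bound), so the empirically best arm is eps-optimal.
    Total pulls: n * m = O((n/eps^2) * log(n/delta)).
    """
    m = math.ceil((2.0 / eps**2) * math.log(2.0 * n / delta))
    means = [sum(pull(i) for _ in range(m)) / m for i in range(n)]
    return max(range(n), key=means.__getitem__)

# Usage: three Bernoulli arms with (hypothetical) success probabilities.
rng = random.Random(0)
probs = [0.1, 0.9, 0.5]
best = naive_pac_best_arm(lambda i: 1.0 if rng.random() < probs[i] else 0.0,
                          n=3, eps=0.1, delta=0.05)
```

Median elimination improves on this by discarding the worse half of the arms in rounds with geometrically shrinking accuracy, which removes the log n factor; the paper shows the resulting $O\big(({n}/{\epsilon^2})\log ({1}/{\delta})\big)$ bound cannot be improved by any sampling policy.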

Cite

Text

Mannor and Tsitsiklis. "Lower Bounds on the Sample Complexity of Exploration in the Multi-Armed Bandit Problem." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_31

Markdown

[Mannor and Tsitsiklis. "Lower Bounds on the Sample Complexity of Exploration in the Multi-Armed Bandit Problem." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/mannor2003colt-lower/) doi:10.1007/978-3-540-45167-9_31

BibTeX

@inproceedings{mannor2003colt-lower,
  title     = {{Lower Bounds on the Sample Complexity of Exploration in the Multi-Armed Bandit Problem}},
  author    = {Mannor, Shie and Tsitsiklis, John N.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2003},
  pages     = {418--432},
  doi       = {10.1007/978-3-540-45167-9_31},
  url       = {https://mlanthology.org/colt/2003/mannor2003colt-lower/}
}