Information Complexity in Bandit Subset Selection
Abstract
We consider the problem of efficiently exploring the arms of a stochastic bandit to identify the best subset. Under the PAC and the fixed-budget formulations, we derive improved bounds by using KL-divergence-based confidence intervals. Whereas the application of a similar idea in the regret setting has yielded bounds in terms of the KL-divergence between the arms, our bounds in the pure-exploration setting involve the Chernoff information between the arms. In addition to introducing this novel quantity to the bandits literature, we contribute a comparison between the “racing” and “smart sampling” strategies for pure-exploration problems, finding strong evidence in favor of the latter.
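For Bernoulli-distributed arms, the two quantities named in the abstract can be computed numerically. Below is a minimal, illustrative sketch (not taken from the paper): the binary KL divergence, and the Chernoff information obtained via its minimax characterization, C(P, Q) = min over R of max(KL(R‖P), KL(R‖Q)), where for Bernoulli P and Q the minimizer R can be taken Bernoulli and found by a simple grid search.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence KL(Ber(p) || Ber(q)) in nats, with clipping
    to avoid log(0) at the boundary."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def chernoff_information(p, q, grid=10000):
    """Chernoff information between Ber(p) and Ber(q), approximated by
    minimizing max(KL(x||p), KL(x||q)) over a grid of Bernoulli means x.
    The optimum sits where the two divergences cross."""
    best = float("inf")
    for i in range(1, grid):
        x = i / grid
        best = min(best, max(kl_bernoulli(x, p), kl_bernoulli(x, q)))
    return best
```

As a sanity check, the Chernoff information is symmetric in its arguments and never exceeds either one-sided KL divergence, e.g. `chernoff_information(0.7, 0.3)` is strictly smaller than `kl_bernoulli(0.7, 0.3)`.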
Kaufmann and Kalyanakrishnan. "Information Complexity in Bandit Subset Selection." Annual Conference on Computational Learning Theory, 2013.
@inproceedings{kaufmann2013colt-information,
  title     = {{Information Complexity in Bandit Subset Selection}},
  author    = {Kaufmann, Emilie and Kalyanakrishnan, Shivaram},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2013},
  pages     = {228--251},
  url       = {https://mlanthology.org/colt/2013/kaufmann2013colt-information/}
}