Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits

Abstract

In this paper, we study the problem of estimating the mean values of all the arms uniformly well in the multi-armed bandit setting. If the variances of the arms were known, one could design an optimal sampling strategy by pulling the arms proportionally to their variances. However, since the distributions are not known in advance, we need to design adaptive sampling strategies that select an arm at each round based on the previously observed samples. We describe two strategies based on pulling the arms proportionally to an upper bound on their variances and derive regret bounds for these strategies. We show that the performance of these allocation strategies depends not only on the variances of the arms but also on the full shape of their distributions.
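To make the allocation idea concrete, below is a minimal Python sketch of a UCB-style strategy in the spirit of the paper: each arm's empirical variance is inflated by a confidence width, and the next pull goes to the arm whose inflated variance per pull is largest. The confidence width, the constant c, and the two-pull initialization are illustrative assumptions, not the paper's exact index or tuned constants.

import numpy as np

def ucb_variance_allocation(arms, budget, c=2.0, rng=None):
    """Allocate a sampling budget over arms by pulling the arm whose
    upper confidence bound on variance, divided by its pull count, is
    largest.  A sketch of the idea, not the paper's exact algorithm.
    Assumes budget >= 2 * len(arms)."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(arms)
    samples = [[] for _ in range(K)]

    # Initialization: pull each arm twice so empirical variances exist.
    for k in range(K):
        samples[k].extend(arms[k](rng) for _ in range(2))

    for t in range(2 * K, budget):
        scores = []
        for k in range(K):
            n_k = len(samples[k])
            var_k = np.var(samples[k], ddof=1)         # empirical variance
            width = c * np.sqrt(np.log(budget) / n_k)  # confidence width (illustrative)
            scores.append((var_k + width) / n_k)       # optimistic variance per pull
        k_star = int(np.argmax(scores))
        samples[k_star].append(arms[k_star](rng))

    means = np.array([np.mean(s) for s in samples])
    counts = np.array([len(s) for s in samples])
    return means, counts

# Usage: the high-variance arm should receive the larger share of pulls,
# mirroring the optimal variance-proportional allocation.
rng = np.random.default_rng(0)
arms = [
    lambda r: r.normal(0.0, 0.1),  # low-variance arm
    lambda r: r.normal(1.0, 1.0),  # high-variance arm
]
means, counts = ucb_variance_allocation(arms, budget=2000, rng=rng)
print(means, counts)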

Cite

Text

Carpentier et al. "Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits." International Conference on Algorithmic Learning Theory, 2011. doi:10.1007/978-3-642-24412-4_17

Markdown

[Carpentier et al. "Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits." International Conference on Algorithmic Learning Theory, 2011.](https://mlanthology.org/alt/2011/carpentier2011alt-upperconfidencebound/) doi:10.1007/978-3-642-24412-4_17

BibTeX

@inproceedings{carpentier2011alt-upperconfidencebound,
  title     = {{Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits}},
  author    = {Carpentier, Alexandra and Lazaric, Alessandro and Ghavamzadeh, Mohammad and Munos, Rémi and Auer, Peter},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2011},
  pages     = {189--203},
  doi       = {10.1007/978-3-642-24412-4_17},
  url       = {https://mlanthology.org/alt/2011/carpentier2011alt-upperconfidencebound/}
}