Deviations of Stochastic Bandit Regret
Abstract
This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. (2009) exhibit a policy such that, with probability at least 1 − 1/n, the regret of the policy is of order log n. They have also shown that such a property is not shared by the popular UCB1 policy of Auer et al. (2002). This work first answers an open question: it extends this negative result to any anytime policy. The second contribution of this paper is to design anytime robust policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms.
Cite
Text
Salomon and Audibert. "Deviations of Stochastic Bandit Regret." International Conference on Algorithmic Learning Theory, 2011. doi:10.1007/978-3-642-24412-4_15
Markdown
[Salomon and Audibert. "Deviations of Stochastic Bandit Regret." International Conference on Algorithmic Learning Theory, 2011.](https://mlanthology.org/alt/2011/salomon2011alt-deviations/) doi:10.1007/978-3-642-24412-4_15
BibTeX
@inproceedings{salomon2011alt-deviations,
title = {{Deviations of Stochastic Bandit Regret}},
author = {Salomon, Antoine and Audibert, Jean-Yves},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2011},
  pages = {159--173},
doi = {10.1007/978-3-642-24412-4_15},
url = {https://mlanthology.org/alt/2011/salomon2011alt-deviations/}
}