Multi-Armed Bandit Algorithms and Empirical Evaluation
Abstract
The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multi-armed bandit algorithms. It also describes and analyzes a new algorithm, Poker (Price Of Knowledge and Estimated Reward), whose performance compares favorably to that of other existing algorithms in several experiments. One remarkable outcome of our experiments is that the most naive approach, the ε-greedy strategy, often proves to be hard to beat.
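The ε-greedy strategy named in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the paper's experimental setup; the Bernoulli arm probabilities, the ε value, and the simulation length are made-up parameters for the example.

```python
import random

def epsilon_greedy(reward_fns, n_rounds, epsilon=0.1, seed=0):
    """Simulate the epsilon-greedy bandit strategy: with probability
    epsilon pull a uniformly random arm (explore), otherwise pull the
    arm with the highest empirical mean reward so far (exploit)."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k      # number of pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    total = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(k)  # explore (or initialize unpulled arms)
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i])  # exploit
        r = reward_fns[arm](rng)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts

# Two hypothetical Bernoulli arms with success probabilities 0.3 and 0.8;
# the strategy should concentrate most pulls on the better (second) arm.
arms = [lambda rng: 1.0 if rng.random() < 0.3 else 0.0,
        lambda rng: 1.0 if rng.random() < 0.8 else 0.0]
total, counts = epsilon_greedy(arms, 1000)
```

The `or 0 in counts` clause simply guarantees every arm is tried at least once before exploitation begins; other initialization schemes are equally common.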
Cite

Text

Vermorel and Mohri. "Multi-Armed Bandit Algorithms and Empirical Evaluation." European Conference on Machine Learning, 2005. doi:10.1007/11564096_42

Markdown

[Vermorel and Mohri. "Multi-Armed Bandit Algorithms and Empirical Evaluation." European Conference on Machine Learning, 2005.](https://mlanthology.org/ecmlpkdd/2005/vermorel2005ecml-multiarmed/) doi:10.1007/11564096_42

BibTeX
@inproceedings{vermorel2005ecml-multiarmed,
  title = {{Multi-Armed Bandit Algorithms and Empirical Evaluation}},
  author = {Vermorel, Joann\`es and Mohri, Mehryar},
  booktitle = {European Conference on Machine Learning},
  year = {2005},
  pages = {437--448},
  doi = {10.1007/11564096_42},
  url = {https://mlanthology.org/ecmlpkdd/2005/vermorel2005ecml-multiarmed/}
}