An Asymptotically Optimal Policy for Finite Support Models in the Multiarmed Bandit Problem

Abstract

In the multiarmed bandit problem, the dilemma between exploration and exploitation in reinforcement learning is expressed as a model of a gambler playing a slot machine with multiple arms. A policy chooses an arm in each round so as to minimize the number of times that arms with suboptimal expected rewards are pulled. We propose the minimum empirical divergence (MED) policy and derive an upper bound on the finite-time regret that meets the asymptotic bound for the case of finite support models. In a setting similar to ours, Burnetas and Katehakis have already proposed an asymptotically optimal policy. However, our policy does not assume any knowledge of the support except for its upper and lower bounds. Furthermore, the criterion for choosing an arm, the minimum empirical divergence, can be computed easily by a convex optimization technique. We confirm by simulations that the MED policy demonstrates good performance in finite time in comparison to other currently popular policies.
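
To give intuition for the "convex optimization" claim in the abstract, here is a minimal Python sketch (not the authors' implementation). It assumes the dual form of the minimum empirical divergence for rewards bounded above by a known constant b, namely D_min(F, mu*) = max over 0 <= lambda <= 1/(b - mu*) of E_F[log(1 - lambda * (X - mu*))], and it uses an illustrative randomized selection rule that weights each arm by exp(-T_i * D_min_i); the helper names `d_min` and `med_choose_arm` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def d_min(samples, mu_star, upper_bound, eps=1e-12):
    """Sketch of the minimum empirical divergence D_min(F, mu*):
    the smallest KL divergence from the empirical distribution F of `samples`
    to any distribution (with support bounded above by `upper_bound`) whose
    mean is at least mu*. Assumed dual form (a 1-D concave maximization):
        D_min = max_{0 <= lam <= 1/(b - mu*)} E_F[ log(1 - lam * (X - mu*)) ].
    """
    x = np.asarray(samples, dtype=float)
    if x.mean() >= mu_star:
        return 0.0  # empirical mean already reaches mu*, so the divergence is zero
    lam_max = 1.0 / max(upper_bound - mu_star, eps)

    def neg_dual(lam):
        # clip the argument of log away from zero for numerical safety
        return -np.mean(np.log(np.maximum(1.0 - lam * (x - mu_star), eps)))

    res = minimize_scalar(neg_dual, bounds=(0.0, lam_max), method="bounded")
    return max(-res.fun, 0.0)


def med_choose_arm(samples_per_arm, upper_bound, rng):
    """Illustrative randomized selection: pull arm i with probability
    proportional to exp(-T_i * D_min_i), where T_i is the number of pulls of
    arm i so far. This mimics the spirit of MED, not the paper's exact rule."""
    mu_star = max(np.mean(s) for s in samples_per_arm)
    weights = np.array([
        np.exp(-len(s) * d_min(s, mu_star, upper_bound))
        for s in samples_per_arm
    ])
    probs = weights / weights.sum()
    return rng.choice(len(samples_per_arm), p=probs)


# Example usage with two arms whose rewards lie in [0, 1]:
# rng = np.random.default_rng(0)
# history = [list(rng.uniform(0.2, 0.8, 20)), list(rng.uniform(0.4, 1.0, 20))]
# next_arm = med_choose_arm(history, upper_bound=1.0, rng=rng)
```

The inner problem is a one-dimensional concave maximization over a bounded interval, which is why it can be handled by standard routines such as `scipy.optimize.minimize_scalar`; only the upper (and lower) bound of the support is needed, matching the assumption stated in the abstract.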

Cite

Text

Honda and Takemura. "An Asymptotically Optimal Policy for Finite Support Models in the Multiarmed Bandit Problem." Machine Learning, 85:361-391, 2011. doi:10.1007/s10994-011-5257-4

Markdown

[Honda and Takemura. "An Asymptotically Optimal Policy for Finite Support Models in the Multiarmed Bandit Problem." Machine Learning, 85:361-391, 2011.](https://mlanthology.org/mlj/2011/honda2011mlj-asymptotically/) doi:10.1007/s10994-011-5257-4

BibTeX

@article{honda2011mlj-asymptotically,
  title     = {{An Asymptotically Optimal Policy for Finite Support Models in the Multiarmed Bandit Problem}},
  author    = {Honda, Junya and Takemura, Akimichi},
  journal   = {Machine Learning},
  year      = {2011},
  pages     = {361--391},
  doi       = {10.1007/s10994-011-5257-4},
  volume    = {85},
  url       = {https://mlanthology.org/mlj/2011/honda2011mlj-asymptotically/}
}