Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits

Abstract

Inspired by the Reward-Biased Maximum Likelihood Estimate method of adaptive control, we propose RBMLE – a novel family of learning algorithms for stochastic multi-armed bandits (SMABs). For a broad range of SMABs, including both the parametric Exponential Family and the non-parametric sub-Gaussian/Exponential family, we show that RBMLE yields an index policy. To choose the bias-growth rate $\alpha(t)$ in RBMLE, we reveal the nontrivial interplay between $\alpha(t)$ and the regret bound that holds for both the Exponential Family and the sub-Gaussian/Exponential family bandits. To quantify the finite-time performance, we prove that RBMLE attains order-optimality by adaptively estimating the unknown constants in the expression of $\alpha(t)$ for Gaussian and sub-Gaussian bandits. Extensive experiments demonstrate that the proposed RBMLE achieves empirical regret performance competitive with state-of-the-art methods, while being more computationally efficient and scalable than the best-performing among them.
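
The abstract describes RBMLE as an index policy whose exploration is driven by a bias-growth rate $\alpha(t)$. Below is a minimal sketch of what such a policy could look like for unit-variance Gaussian bandits, assuming an index of the form $\hat{\mu}_i(t) + \alpha(t)/(2N_i(t))$ with $\alpha(t) = c\log t$. The index form, the constant `alpha_scale`, and the function `rbmle_gaussian_bandit` are illustrative assumptions rather than the paper's exact formulation.

```python
import math
import random

def rbmle_gaussian_bandit(arms, horizon, alpha_scale=2.0):
    """Sketch of an RBMLE-style index policy for unit-variance Gaussian bandits.

    `arms` is a list of callables, each returning one reward sample.
    Assumed (illustrative): index mu_hat_i + alpha(t) / (2 * N_i(t))
    with bias-growth rate alpha(t) = alpha_scale * log(t).
    """
    k = len(arms)
    counts = [0] * k      # N_i(t): number of pulls of each arm
    sums = [0.0] * k      # running reward sum of each arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1     # pull each arm once to initialize estimates
        else:
            alpha_t = alpha_scale * math.log(t)  # bias-growth rate alpha(t)
            # Reward-biased index: MLE of the mean plus a vanishing bias term
            index = [sums[i] / counts[i] + alpha_t / (2 * counts[i])
                     for i in range(k)]
            i = max(range(k), key=index.__getitem__)
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total_reward += r
    return total_reward

# Example: three Gaussian arms with means 0.1, 0.5, 0.9
arms = [lambda m=m: random.gauss(m, 1.0) for m in (0.1, 0.5, 0.9)]
print(rbmle_gaussian_bandit(arms, horizon=10_000))
```

Because the bias term $\alpha(t)/(2N_i(t))$ grows only logarithmically in $t$ but shrinks with each pull, under-explored arms are periodically revisited while the policy remains a simple per-arm index computation, which is the source of the computational efficiency the abstract highlights.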

Cite

Text

Liu et al. "Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits." International Conference on Machine Learning, 2020.

Markdown

[Liu et al. "Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/liu2020icml-exploration/)

BibTeX

@inproceedings{liu2020icml-exploration,
  title     = {{Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits}},
  author    = {Liu, Xi and Hsieh, Ping-Chun and Hung, Yu Heng and Bhattacharya, Anirban and Kumar, P. R.},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {6248--6258},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/liu2020icml-exploration/}
}