Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits
Abstract
Modifying the reward-biased maximum likelihood method originally proposed in the adaptive control literature, we propose novel learning algorithms to handle the explore-exploit trade-off in linear bandit problems as well as generalized linear bandit problems. We develop novel index policies that we prove are order-optimal, and we show in extensive experiments that their empirical performance is competitive with state-of-the-art benchmark methods. The new policies achieve this with low computation time per pull for linear bandits, thereby yielding both favorable regret and computational efficiency.
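As a concrete illustration of the approach the abstract describes, below is a minimal sketch of a reward-biased index policy for a linear bandit. The specific index form (regularized least-squares estimate plus a bias-scaled quadratic term), the bias schedule `alpha_fn`, the ridge prior, and the helper name `lin_rbmle_step` are illustrative assumptions for this sketch, not necessarily the paper's exact algorithm.

```python
import numpy as np

def lin_rbmle_step(arms, V, b, t, alpha_fn=lambda t: np.sqrt(t)):
    """Pick an arm via a reward-biased index (illustrative sketch).

    arms -- (K, d) array of arm feature vectors
    V    -- (d, d) regularized design matrix, lam*I + sum_s x_s x_s^T
    b    -- (d,) reward-weighted feature sum, sum_s r_s x_s
    t    -- current round, fed to the assumed bias schedule alpha_fn
    """
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b                         # regularized MLE of theta
    means = arms @ theta_hat                      # estimated mean rewards
    # The reward bias reduces to a closed-form quadratic bonus per arm.
    bonus = 0.5 * alpha_fn(t) * np.einsum("kd,de,ke->k", arms, V_inv, arms)
    return int(np.argmax(means + bonus))

# Toy usage: d = 2 features, K = 3 arms, standard rank-one updates.
rng = np.random.default_rng(0)
theta_true = np.array([0.6, -0.2])
arms = rng.normal(size=(3, 2))
V, b = np.eye(2), np.zeros(2)                     # lam = 1 ridge prior
for t in range(1, 201):
    k = lin_rbmle_step(arms, V, b, t)
    r = arms[k] @ theta_true + 0.1 * rng.normal() # noisy linear reward
    V += np.outer(arms[k], arms[k])
    b += r * arms[k]
```

Because the index is closed-form, each pull costs only a small matrix solve, which reflects the low per-pull computation the abstract highlights.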
Cite
Text
Hung et al. "Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I9.16961
Markdown
[Hung et al. "Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/hung2021aaai-reward/) doi:10.1609/AAAI.V35I9.16961
BibTeX
@inproceedings{hung2021aaai-reward,
title = {{Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits}},
author = {Hung, Yu-Heng and Hsieh, Ping-Chun and Liu, Xi and Kumar, P. R.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {7874--7882},
doi = {10.1609/AAAI.V35I9.16961},
url = {https://mlanthology.org/aaai/2021/hung2021aaai-reward/}
}