Regime Switching Bandits
Abstract
We study a multi-armed bandit problem where the rewards exhibit regime switching. Specifically, the distributions of the random rewards generated from all arms are modulated by a common underlying state modeled as a finite-state Markov chain. The agent does not observe the underlying state and has to learn the transition matrix and the reward distributions. We propose a learning algorithm for this problem, building on spectral method-of-moments estimation for hidden Markov models, belief error control in partially observable Markov decision processes, and upper-confidence-bound methods for online learning. We also establish a regret upper bound of $O(T^{2/3}\sqrt{\log T})$ for the proposed learning algorithm, where $T$ is the learning horizon. Finally, we conduct proof-of-concept experiments to illustrate the performance of the learning algorithm.
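As a rough illustration of the setup in the abstract, the Python sketch below simulates a regime-switching bandit in which a hidden finite-state Markov chain modulates the reward distributions of all arms and the agent only observes the reward of the pulled arm. The class name, the Bernoulli reward model, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


class RegimeSwitchingBandit:
    """Hypothetical environment: a hidden Markov chain drives all arms' rewards."""

    def __init__(self, transition_matrix, reward_means, seed=0):
        # transition_matrix: (S, S) row-stochastic matrix of the hidden chain.
        # reward_means: (S, K) mean reward of each of K arms in each of S states.
        self.P = np.asarray(transition_matrix, dtype=float)
        self.mu = np.asarray(reward_means, dtype=float)
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.integers(self.P.shape[0])  # hidden, never revealed

    def pull(self, arm):
        # The agent sees only the reward of the chosen arm;
        # the underlying state then transitions according to P.
        reward = self.rng.binomial(1, self.mu[self.state, arm])
        self.state = self.rng.choice(self.P.shape[0], p=self.P[self.state])
        return reward


# Example: 2 hidden regimes, 3 arms; the best arm depends on the regime.
env = RegimeSwitchingBandit(
    transition_matrix=[[0.9, 0.1],
                       [0.2, 0.8]],
    reward_means=[[0.9, 0.5, 0.1],
                  [0.1, 0.5, 0.9]],
)
rewards = [env.pull(arm=0) for _ in range(5)]
print(rewards)
```

Because the state is never observed, a learner interacting with such an environment must infer the regime from the reward stream itself, which is what motivates the hidden-Markov-model estimation and belief-tracking components described above.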
Cite
Text
Zhou et al. "Regime Switching Bandits." Neural Information Processing Systems, 2021.
Markdown
[Zhou et al. "Regime Switching Bandits." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zhou2021neurips-regime/)
BibTeX
@inproceedings{zhou2021neurips-regime,
title = {{Regime Switching Bandits}},
author = {Zhou, Xiang and Xiong, Yi and Chen, Ningyuan and Gao, Xuefeng},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zhou2021neurips-regime/}
}