Thompson Sampling Is Asymptotically Optimal in General Environments
Abstract
We discuss a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, non-ergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges in mean to the optimal value and (2) given a recoverability assumption, regret is sublinear.
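The paper treats Thompson sampling in general stochastic environments; as an illustration of the core idea (sample from the posterior, act greedily with respect to the sample, update on the observation), here is a minimal sketch for the classic Bernoulli bandit special case. The arm means, horizon, and Beta prior are illustrative assumptions, not from the paper.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Illustrative Thompson sampling for Bernoulli bandits: sample a mean
    from each arm's Beta posterior, pull the argmax, update the posterior
    with the observed reward."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) prior; alpha counts successes + 1
    beta = [1] * k   # beta counts failures + 1
    total_reward = 0
    for _ in range(horizon):
        # Draw one plausible mean per arm from its current posterior.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total_reward, alpha, beta
```

As the posterior concentrates, the sampled means of suboptimal arms rarely exceed that of the best arm, so per-step regret vanishes, the bandit analogue of the asymptotic optimality shown in the paper.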
Cite
Text
Leike et al. "Thompson Sampling Is Asymptotically Optimal in General Environments." Conference on Uncertainty in Artificial Intelligence, 2016.
Markdown
[Leike et al. "Thompson Sampling Is Asymptotically Optimal in General Environments." Conference on Uncertainty in Artificial Intelligence, 2016.](https://mlanthology.org/uai/2016/leike2016uai-thompson/)
BibTeX
@inproceedings{leike2016uai-thompson,
title = {{Thompson Sampling Is Asymptotically Optimal in General Environments}},
author = {Leike, Jan and Lattimore, Tor and Orseau, Laurent and Hutter, Marcus},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2016},
url = {https://mlanthology.org/uai/2016/leike2016uai-thompson/}
}