Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search

Abstract

Bayesian planning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, planning optimally in the face of uncertainty is notoriously taxing, since the search space is enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach avoids expensive applications of Bayes rule within the search tree by sampling models from current beliefs, and furthermore performs this sampling in a lazy manner. This enables it to outperform previous Bayesian model-based reinforcement learning algorithms by a significant margin on several well-known benchmark problems. As we show, our approach can even work in problems with an infinite state space that lie qualitatively out of reach of almost all previous work in Bayesian exploration.
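To make the two ideas in the abstract concrete, below is a minimal Python sketch of root sampling with lazy model realization inside UCT, in the spirit of the paper's approach. It is not the authors' BAMCP implementation: `DirichletPosterior`, `bamcp_style_action`, and `reward_fn` are illustrative names, the belief is a simple independent-Dirichlet model over a small discrete MDP, and tree nodes are keyed by state rather than by full history (BAMCP proper plans over histories) to keep the example short.

```python
# Illustrative sketch only: root-sampled, lazily realized models in UCT.
import math
import random
from collections import defaultdict

class DirichletPosterior:
    """Independent Dirichlet posterior over the transition rows of a
    small discrete MDP (a simple, common belief model; an assumption here)."""

    def __init__(self, n_states, n_actions, prior=1.0):
        self.n_states = n_states
        self.n_actions = n_actions
        # counts[s][a] holds pseudo-counts over next states.
        self.counts = [[[prior] * n_states for _ in range(n_actions)]
                       for _ in range(n_states)]

    def update(self, s, a, s_next):
        """Bayes update happens only on real experience: one count increment."""
        self.counts[s][a][s_next] += 1.0

    def sample_transition(self, s, a):
        """Draw one transition row from the posterior (normalized Gammas)."""
        draws = [random.gammavariate(alpha, 1.0) for alpha in self.counts[s][a]]
        total = sum(draws)
        return [d / total for d in draws]

def bamcp_style_action(posterior, reward_fn, root_state, n_sims=1000,
                       depth=20, gamma=0.95, c_uct=1.4):
    """Choose an action via many UCT simulations, each run in a model
    sampled from the posterior; no Bayes-rule updates occur in the tree."""
    N = defaultdict(int)    # visit counts per (state, action)
    Ns = defaultdict(int)   # visit counts per state
    Q = defaultdict(float)  # incremental mean returns

    def simulate(s, d, sampled_rows):
        if d == 0:
            return 0.0
        def ucb(a):  # UCB1 with unvisited actions tried first
            if N[(s, a)] == 0:
                return float("inf")
            return Q[(s, a)] + c_uct * math.sqrt(math.log(Ns[s] + 1) / N[(s, a)])
        a = max(range(posterior.n_actions), key=ucb)
        # Lazy sampling: realize a transition row only when first needed,
        # so a full MDP is never instantiated for any single simulation.
        if (s, a) not in sampled_rows:
            sampled_rows[(s, a)] = posterior.sample_transition(s, a)
        s_next = random.choices(range(posterior.n_states),
                                weights=sampled_rows[(s, a)])[0]
        ret = reward_fn(s, a, s_next) + gamma * simulate(s_next, d - 1, sampled_rows)
        Ns[s] += 1
        N[(s, a)] += 1
        Q[(s, a)] += (ret - Q[(s, a)]) / N[(s, a)]
        return ret

    for _ in range(n_sims):
        # A fresh dict per simulation == one model draw per root sample.
        simulate(root_state, depth, sampled_rows={})
    return max(range(posterior.n_actions), key=lambda a: Q[(root_state, a)])
```

In use, the agent alternates planning and learning: `bamcp_style_action` selects an action from the current belief, the environment returns a transition, `posterior.update` folds it in, and planning restarts from the new belief. The key design point the sketch illustrates is that belief updates are pushed entirely outside the search tree, which is what makes the method tractable.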

Cite

Text

Guez et al. "Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search." Journal of Artificial Intelligence Research, 2013. doi:10.1613/JAIR.4117

Markdown

[Guez et al. "Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search." Journal of Artificial Intelligence Research, 2013.](https://mlanthology.org/jair/2013/guez2013jair-scalable/) doi:10.1613/JAIR.4117

BibTeX

@article{guez2013jair-scalable,
  title     = {{Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search}},
  author    = {Guez, Arthur and Silver, David and Dayan, Peter},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2013},
  pages     = {841--883},
  doi       = {10.1613/JAIR.4117},
  volume    = {48},
  url       = {https://mlanthology.org/jair/2013/guez2013jair-scalable/}
}