Monte Carlo Bayesian Reinforcement Learning

Abstract

Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in a model and represents uncertainty in model parameters by maintaining a probability distribution over them. This paper presents Monte Carlo BRL (MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a finite set of hypotheses for the model parameter values and forms a discrete partially observable Markov decision process (POMDP) whose state space is a cross product of the state space for the reinforcement learning task and the sampled model parameter space. The POMDP does not require conjugate distributions for belief representation, as earlier works often do, and can be solved relatively easily with point-based approximation algorithms. MC-BRL naturally handles both fully and partially observable worlds. Theoretical and experimental results show that the discrete POMDP approximates the underlying BRL task well with guaranteed performance.
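
The following is a minimal sketch, not the paper's implementation, of the sampled-hypothesis idea on a toy fully observable task with an unknown transition model: a finite set of parameter hypotheses is drawn from the prior, a belief over those hypotheses is updated by Bayes' rule from observed transitions, and actions are chosen from belief-weighted values. All names and parameters (n_states, K, the Dirichlet prior, the QMDP-style action selection standing in for a point-based POMDP solver) are illustrative assumptions, not taken from the paper.

# Illustrative sketch of the MC-BRL idea (fully observable case).
# Hypothetical setup; a QMDP-style belief-weighted value replaces the
# point-based POMDP solver used in the paper.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma = 5, 2, 0.95
K = 20  # number of sampled parameter hypotheses

def sample_hypothesis():
    """Sample one transition model T[s, a, s'] from a Dirichlet prior."""
    return rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

def value_iteration(T, R, iters=200):
    """Solve the MDP induced by one sampled hypothesis."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R + gamma * np.einsum("sap,p->sa", T, V)
    return Q

# Reward is known; only the transition model is uncertain.
R = rng.uniform(size=(n_states, n_actions))

hypotheses = [sample_hypothesis() for _ in range(K)]
Q_per_hyp = np.stack([value_iteration(T, R) for T in hypotheses])  # (K, S, A)
belief = np.full(K, 1.0 / K)  # uniform belief over the sampled hypotheses

def act(s):
    """Pick the action maximizing the belief-weighted Q-value."""
    return int(np.argmax(np.einsum("k,ka->a", belief, Q_per_hyp[:, s, :])))

def update_belief(s, a, s_next):
    """Bayes update of the belief after observing one transition."""
    global belief
    lik = np.array([T[s, a, s_next] for T in hypotheses])
    belief = belief * lik
    belief /= belief.sum()

# Short interaction loop against a hidden "true" model.
true_T = sample_hypothesis()
s = 0
for _ in range(50):
    a = act(s)
    s_next = rng.choice(n_states, p=true_T[s, a])
    update_belief(s, a, s_next)
    s = s_next
print("posterior over hypotheses:", np.round(belief, 3))

In the partially observable case described in the abstract, the same construction applies with the hidden state folded into the POMDP state alongside the sampled parameters, which is why a POMDP solver rather than the weighted lookup above is needed in general.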

Cite

Text

Wang et al. "Monte Carlo Bayesian Reinforcement Learning." International Conference on Machine Learning, 2012.

Markdown

[Wang et al. "Monte Carlo Bayesian Reinforcement Learning." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/wang2012icml-monte/)

BibTeX

@inproceedings{wang2012icml-monte,
  title     = {{Monte Carlo Bayesian Reinforcement Learning}},
  author    = {Wang, Yi and Won, Kok Sung and Hsu, David and Lee, Wee Sun},
  booktitle = {International Conference on Machine Learning},
  year      = {2012},
  url       = {https://mlanthology.org/icml/2012/wang2012icml-monte/}
}