ABC Reinforcement Learning

Abstract

We introduce a simple, general framework for likelihood-free Bayesian reinforcement learning through Approximate Bayesian Computation (ABC). The advantage is that we only require a prior distribution over a class of simulators. This is useful when a probabilistic model of the underlying process is too complex to formulate, but detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique in this setting. It can be seen as an extension of simulation methods to both planning and inference. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we present a theorem showing that ABC is sound.
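To make the core idea concrete, below is a minimal, hypothetical sketch of ABC rejection sampling over a class of simulators (this is not the authors' code; the toy random-walk simulator, the drift parameter `theta`, the summary statistic, and the tolerance `epsilon` are all illustrative assumptions). Candidate simulators are drawn from the prior, run forward, and kept only when their summary statistics fall within a tolerance of the observed data; the retained simulators approximate the posterior over models.

```python
# Minimal sketch of the ABC idea behind ABC-RL (hypothetical example,
# not the paper's implementation): place a prior over a class of
# simulators, draw candidates, and keep those whose simulated data
# has summary statistics within epsilon of the observed data.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, horizon=50):
    """Toy 1-D random-walk 'simulator' parameterized by drift theta."""
    steps = theta + rng.normal(0.0, 1.0, size=horizon)
    return np.cumsum(steps)

def summary(trajectory):
    """Summary statistic: mean increment (not necessarily sufficient)."""
    return np.mean(np.diff(trajectory, prepend=0.0))

# "Observed" data from an unknown true simulator with drift 0.3.
observed = simulate(0.3)
s_obs = summary(observed)

# ABC rejection sampling with prior theta ~ N(0, 1).
epsilon = 0.05
posterior_samples = []
for _ in range(5000):
    theta = rng.normal(0.0, 1.0)      # draw a simulator from the prior
    s_sim = summary(simulate(theta))  # simulate and summarize
    if abs(s_sim - s_obs) < epsilon:  # accept if statistics are close
        posterior_samples.append(theta)

print(f"accepted {len(posterior_samples)} simulators, "
      f"posterior mean drift: {np.mean(posterior_samples):.3f}")
```

In ABC-RL proper, the summary statistics would presumably be computed from observed trajectories of states, actions, and rewards, and the accepted simulators would then feed into a Bayesian reinforcement learning method for planning, as the abstract describes.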

Cite

Text

Dimitrakakis and Tziortziotis. "ABC Reinforcement Learning." International Conference on Machine Learning, 2013.

Markdown

[Dimitrakakis and Tziortziotis. "ABC Reinforcement Learning." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/dimitrakakis2013icml-abc/)

BibTeX

@inproceedings{dimitrakakis2013icml-abc,
  title     = {{ABC Reinforcement Learning}},
  author    = {Dimitrakakis, Christos and Tziortziotis, Nikolaos},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {684--692},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/dimitrakakis2013icml-abc/}
}