An Analytic Solution to Discrete Bayesian Reinforcement Learning

Abstract

Reinforcement learning (RL) was originally proposed as a framework to allow agents to learn in an online fashion as they interact with their environment. Existing RL algorithms fall short of achieving this goal because the amount of exploration required is often too costly and/or too time-consuming for online learning. As a result, RL is mostly used for offline learning in simulated environments. We propose a new algorithm, called BEETLE, for effective online learning that is computationally efficient while minimizing the amount of exploration. We take a Bayesian model-based approach, framing RL as a partially observable Markov decision process. Our two main contributions are the analytical derivation showing that the optimal value function is the upper envelope of a set of multivariate polynomials, and an efficient point-based value iteration algorithm that exploits this simple parameterization.
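The key structural result above — that the optimal Bayesian value function is the upper envelope (pointwise maximum) of a set of multivariate polynomials in the unknown transition parameters — can be illustrated with a minimal sketch. This is not the authors' code; the polynomial representation and example values below are purely hypothetical:

```python
# Minimal sketch (illustrative, not the paper's implementation): represent
# each polynomial as a list of monomials (coefficient, exponent tuple) over
# the unknown transition parameters theta, and take the value function to be
# the upper envelope, i.e. the max over the polynomial set.

def eval_poly(poly, theta):
    """Evaluate a multivariate polynomial given as [(coeff, (e1, ..., en)), ...]."""
    total = 0.0
    for coeff, exps in poly:
        term = coeff
        for t, e in zip(theta, exps):
            term *= t ** e
        total += term
    return total

def value_upper_envelope(polys, theta):
    """V(theta) = max over the set of polynomials (the upper envelope)."""
    return max(eval_poly(p, theta) for p in polys)

# Hypothetical example: two polynomials over theta = (t1, t2).
polys = [
    [(1.0, (1, 0)), (0.5, (0, 1))],  # t1 + 0.5 * t2
    [(2.0, (0, 2))],                 # 2 * t2^2
]
print(value_upper_envelope(polys, (0.3, 0.8)))  # max(0.7, 1.28) -> 1.28
```

A point-based value iteration in this setting would back up such polynomial sets at a finite collection of belief points, keeping the representation compact.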

Cite

Text

Poupart et al. "An Analytic Solution to Discrete Bayesian Reinforcement Learning." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143932

Markdown

[Poupart et al. "An Analytic Solution to Discrete Bayesian Reinforcement Learning." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/poupart2006icml-analytic/) doi:10.1145/1143844.1143932

BibTeX

@inproceedings{poupart2006icml-analytic,
  title     = {{An Analytic Solution to Discrete Bayesian Reinforcement Learning}},
  author    = {Poupart, Pascal and Vlassis, Nikos and Hoey, Jesse and Regan, Kevin},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {697--704},
  doi       = {10.1145/1143844.1143932},
  url       = {https://mlanthology.org/icml/2006/poupart2006icml-analytic/}
}