Bayesian Policy Optimization for Model Uncertainty

Abstract

Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers.
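To make the formulation concrete, here is a compact sketch of the standard BAMDP objective the abstract refers to. The notation is ours, not necessarily the paper's: φ denotes the latent model parameters, b_t the posterior belief over φ given the history, and T_φ the transition model under φ. The policy conditions on both the observable state and the belief, and maximizes the Bayesian value function while the belief is updated by Bayes' rule:

\[
V^{\pi}(b_0, s_0) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t) \,\middle|\, a_t \sim \pi(\cdot \mid s_t, b_t)\right],
\qquad
b_{t+1}(\phi) \propto T_{\phi}(s_{t+1} \mid s_t, a_t)\, b_t(\phi).
\]

The abstract also describes a policy network that encodes the belief distribution independently from the observable state. Below is a minimal illustrative sketch in PyTorch of that idea, assuming fully-connected encoders and a concatenation-based combination; the layer sizes, activations, and combination scheme are our assumptions, not the paper's exact design:

import torch
import torch.nn as nn

class BeliefStatePolicy(nn.Module):
    # Hypothetical sketch: two independent encoders whose features are
    # concatenated and mapped to an action, mirroring the abstract's
    # description of encoding belief separately from the observable state.
    def __init__(self, state_dim, belief_dim, hidden_dim, action_dim):
        super().__init__()
        self.state_encoder = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.Tanh())
        self.belief_encoder = nn.Sequential(
            nn.Linear(belief_dim, hidden_dim), nn.Tanh())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, action_dim))

    def forward(self, state, belief):
        # Encode state and (discretized) belief independently, then combine.
        features = torch.cat(
            [self.state_encoder(state), self.belief_encoder(belief)], dim=-1)
        return self.head(features)

Here, belief would be the discretized posterior over the latent parameters maintained by a Bayes filter; encoding it separately keeps the policy input manageable as the discretization of the continuous parameter space grows finer, which is the challenge the abstract's architecture is meant to address.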

Cite

Text

Lee et al. "Bayesian Policy Optimization for Model Uncertainty." International Conference on Learning Representations, 2019.

Markdown

[Lee et al. "Bayesian Policy Optimization for Model Uncertainty." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/lee2019iclr-bayesian/)

BibTeX

@inproceedings{lee2019iclr-bayesian,
  title     = {{Bayesian Policy Optimization for Model Uncertainty}},
  author    = {Lee, Gilwoo and Hou, Brian and Mandalika, Aditya and Lee, Jeongseok and Choudhury, Sanjiban and Srinivasa, Siddhartha S.},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/lee2019iclr-bayesian/}
}