Constrained Policy Optimization via Bayesian World Models

Abstract

Improving sample efficiency and safety are crucial challenges when deploying reinforcement learning in high-stakes, real-world applications. We propose LAMBDA, a novel model-based approach for policy optimization in safety-critical tasks modeled via constrained Markov decision processes. Our approach utilizes Bayesian world models and harnesses the resulting uncertainty to maximize optimistic upper bounds on the task objective, as well as pessimistic upper bounds on the safety constraints. We demonstrate LAMBDA's state-of-the-art performance on the Safety-Gym benchmark suite in terms of sample efficiency and constraint violation.
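The core idea in the abstract, using posterior uncertainty optimistically for the objective but pessimistically for the safety cost, can be illustrated with a minimal sketch. This is not the authors' implementation; all names, numbers, and the fixed Lagrange multiplier below are hypothetical, standing in for Monte Carlo return estimates produced by sampled world models.

```python
# Illustrative sketch (not the paper's code): per-posterior-sample
# return estimates for one candidate policy under a Bayesian world model.
reward_returns = [9.1, 11.4, 10.2, 8.7, 10.9]   # task-objective estimates
cost_returns   = [0.8, 1.6, 1.1, 2.3, 1.4]      # safety-cost estimates

# Optimism for the objective: take the best-case (upper-bound) estimate.
optimistic_objective = max(reward_returns)       # 11.4

# Pessimism for safety: take the worst-case (upper-bound) cost estimate.
pessimistic_cost = max(cost_returns)             # 2.3

# The policy would then be optimized to maximize the optimistic objective
# while keeping the pessimistic cost within a budget, e.g. via a
# Lagrangian penalty (budget and multiplier values are arbitrary here).
cost_budget = 2.0
lagrange_multiplier = 1.0
penalized_objective = optimistic_objective - lagrange_multiplier * max(
    0.0, pessimistic_cost - cost_budget
)                                                # ~11.1
```

In practice the estimates would come from imagined rollouts in sampled models, and the multiplier would itself be adapted so that the pessimistic cost bound converges to the budget.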

Cite

Text

As et al. "Constrained Policy Optimization via Bayesian World Models." International Conference on Learning Representations, 2022.

Markdown

[As et al. "Constrained Policy Optimization via Bayesian World Models." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/as2022iclr-constrained/)

BibTeX

@inproceedings{as2022iclr-constrained,
  title     = {{Constrained Policy Optimization via Bayesian World Models}},
  author    = {As, Yarden and Usmanova, Ilnura and Curi, Sebastian and Krause, Andreas},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/as2022iclr-constrained/}
}