SafeDreamer: Safe Reinforcement Learning with World Models

Abstract

The deployment of Reinforcement Learning (RL) in real-world applications is constrained by its failure to satisfy safety criteria. Existing Safe Reinforcement Learning (SafeRL) methods, which rely on cost functions to enforce safety, often fail to achieve zero-cost performance in complex scenarios, especially vision-only tasks. These limitations stem primarily from model inaccuracies and inadequate sample efficiency. Integrating world models has proven effective in mitigating these shortcomings. In this work, we introduce SafeDreamer, a novel algorithm that incorporates Lagrangian-based methods into the world model planning process within the Dreamer framework. Our method achieves nearly zero-cost performance on a range of tasks, spanning both low-dimensional and vision-only inputs, within the Safety-Gymnasium benchmark, demonstrating its efficacy in balancing performance and safety in RL tasks. Further details can be found in the code repository: https://github.com/PKU-Alignment/SafeDreamer.
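
To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of Lagrangian-penalized planning with a learned world model: a cross-entropy-method planner scores imagined action sequences by predicted return minus a cost penalty, and the Lagrange multiplier is raised by dual ascent whenever episode costs exceed a budget. The `world_model.rollout` interface, the CEM hyperparameters, and the multiplier update rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: `world_model.rollout`, the hyperparameters,
# and the dual-ascent update are assumptions, not SafeDreamer's actual code.
import numpy as np

def plan_with_lagrangian(world_model, state, lagrange_lambda,
                         horizon=15, candidates=500, elites=50, iters=5,
                         action_dim=2):
    """CEM planner scoring imagined trajectories by reward minus a
    Lagrangian cost penalty."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate action sequences from the current proposal.
        actions = mean + std * np.random.randn(candidates, horizon, action_dim)
        # Imagine each sequence with the world model; `rollout` is assumed
        # to return per-trajectory predicted return and cumulative cost.
        returns, costs = world_model.rollout(state, actions)
        # Lagrangian objective: maximize reward while penalizing cost.
        scores = returns - lagrange_lambda * costs
        # Refit the proposal distribution to the elite sequences.
        elite_idx = np.argsort(scores)[-elites:]
        mean = actions[elite_idx].mean(axis=0)
        std = actions[elite_idx].std(axis=0)
    return mean[0]  # execute the first action of the best plan

def update_lambda(lagrange_lambda, episode_cost, cost_budget, lr=0.01):
    """Dual ascent: raise the penalty when costs exceed the budget."""
    return max(0.0, lagrange_lambda + lr * (episode_cost - cost_budget))
```

In this framing, planning against the world model provides sample-efficient constraint evaluation before acting, while the multiplier update drives realized costs toward the budget, approaching zero-cost behavior when the budget is set to zero.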

Cite

Text

Huang et al. "SafeDreamer: Safe Reinforcement Learning with World Models." International Conference on Learning Representations, 2024.

Markdown

[Huang et al. "SafeDreamer: Safe Reinforcement Learning with World Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/huang2024iclr-safedreamer/)

BibTeX

@inproceedings{huang2024iclr-safedreamer,
  title     = {{SafeDreamer: Safe Reinforcement Learning with World Models}},
  author    = {Huang, Weidong and Ji, Jiaming and Xia, Chunhe and Zhang, Borong and Yang, Yaodong},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/huang2024iclr-safedreamer/}
}