Conservative Safety Critics for Exploration

Abstract

Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions while still enabling trial-and-error learning. In this paper, we target the problem of safe exploration in RL by learning a conservative safety estimate of environment states through a critic, and provably upper-bound the likelihood of catastrophic failures at every training iteration. We theoretically characterize the tradeoff between safety and policy improvement, show that the safety constraints are satisfied with high probability during training, derive provable convergence guarantees for our approach, which is asymptotically no worse than standard RL, and empirically demonstrate the efficacy of the proposed approach on a suite of challenging navigation, manipulation, and locomotion tasks. Our results demonstrate that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates during training compared to prior methods. Videos are available at https://sites.google.com/view/conservative-safety-critics/
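The abstract describes gating exploration with a learned safety critic that conservatively overestimates failure probability. As a minimal illustrative sketch (not the paper's implementation), the gating step can be rejection sampling: draw actions from the policy until the critic's estimated failure probability falls below a threshold. All names here (`safe_action`, `safety_critic`, `eps`) are hypothetical placeholders, and the critic below is a toy stand-in rather than a trained network.

```python
import numpy as np

def safe_action(policy_sample, safety_critic, state, eps=0.1, n_tries=100):
    """Rejection-sample actions until one is predicted safe.

    `safety_critic(state, action)` is assumed to return a conservative
    (over-)estimate of the failure probability; `eps` is the safety
    threshold on that estimate.
    """
    for _ in range(n_tries):
        a = policy_sample(state)
        if safety_critic(state, a) <= eps:
            return a
    # In practice one might fall back to a known-safe recovery action here.
    raise RuntimeError("no sampled action passed the safety check")

# Toy example: a stand-in critic that flags large-magnitude actions as unsafe.
rng = np.random.default_rng(0)
policy = lambda s: rng.normal(size=2)
critic = lambda s, a: float(np.linalg.norm(a) > 1.5)  # hypothetical critic
action = safe_action(policy, critic, state=np.zeros(2), eps=0.1)
```

Because the critic is trained to overestimate failure probability, thresholding it errs on the side of rejecting borderline actions, which is the conservatism the abstract refers to.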

Cite

Text

Bharadhwaj et al. "Conservative Safety Critics for Exploration." International Conference on Learning Representations, 2021.

Markdown

[Bharadhwaj et al. "Conservative Safety Critics for Exploration." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/bharadhwaj2021iclr-conservative/)

BibTeX

@inproceedings{bharadhwaj2021iclr-conservative,
  title     = {{Conservative Safety Critics for Exploration}},
  author    = {Bharadhwaj, Homanga and Kumar, Aviral and Rhinehart, Nicholas and Levine, Sergey and Shkurti, Florian and Garg, Animesh},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/bharadhwaj2021iclr-conservative/}
}