Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees

Abstract

Safety is a critical component of autonomous systems and remains a key challenge to deploying learning-based policies in the real world. In this paper, we propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed safety-aware policy distribution. To improve safety, we apply a dual policy setup where a performance policy is trained using the cumulative task reward and a backup (safety) policy is trained by solving the Safety Bellman Equation based on Hamilton-Jacobi reachability analysis. In Sim-to-Lab transfer, we apply a supervisory control scheme to shield unsafe actions during exploration; in Lab-to-Real transfer, we leverage the Probably Approximately Correct (PAC)-Bayes framework to provide lower bounds on the expected performance and safety of policies in unseen environments. We empirically study the proposed framework for ego-vision navigation in two types of indoor environments, including a photo-realistic one. We also demonstrate strong generalization performance through hardware experiments in real indoor spaces with a quadrupedal robot.
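
The dual-policy shielding idea described above can be illustrated with a minimal sketch (this is a hypothetical illustration, not the authors' code): perf_policy, safety_policy, and safety_critic are assumed callables, and the threshold eps along with the critic's sign convention (higher value meaning a predicted future safety violation) are assumptions for the example.

def shielded_action(obs, perf_policy, safety_policy, safety_critic, eps=0.0):
    """Propose the performance policy's action; override it with the backup
    (safety) policy's action if the learned safety critic predicts a
    constraint violation."""
    a_perf = perf_policy(obs)
    # The safety critic approximates an HJ-reachability value of taking
    # a_perf from obs; values above eps are assumed to signal future failure.
    if safety_critic(obs, a_perf) > eps:
        return safety_policy(obs)  # backup policy steers the agent away from failure
    return a_perf                  # otherwise keep the task-reward-driven action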

Cite

Text

Hsu et al. "Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees." NeurIPS 2022 Workshops: TEA, 2022.

Markdown

[Hsu et al. "Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees." NeurIPS 2022 Workshops: TEA, 2022.](https://mlanthology.org/neuripsw/2022/hsu2022neuripsw-simtolabtoreal/)

BibTeX

@inproceedings{hsu2022neuripsw-simtolabtoreal,
  title     = {{Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees}},
  author    = {Hsu, Kai-Chieh and Ren, Allen Z. and Nguyen, Duy Phuong and Majumdar, Anirudha and Fisac, Jaime Fernández},
  booktitle = {NeurIPS 2022 Workshops: TEA},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/hsu2022neuripsw-simtolabtoreal/}
}