SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition
Abstract
Methods that extract policy primitives from offline demonstrations using deep generative models have shown promise at accelerating reinforcement learning (RL) on new tasks. Intuitively, these methods should also help to train safe RL agents because they encode useful skills. However, we find that these techniques are not well suited to safe policy learning because they focus only on positive experiences and ignore negative ones (e.g., unsafe or unsuccessful trajectories), which harms their ability to generalize safely to new tasks. Instead, we model the latent safety context with principled contrastive training on an offline dataset of demonstrations from many tasks, including both negative and positive experiences. Using this latent variable, our RL framework, SAFEty skill pRiors (SAFER), extracts task-specific safe primitive skills that generalize safely and successfully to new tasks. At inference time, policies trained with SAFER learn to compose these safe skills into successful policies. We theoretically characterize why SAFER enforces safe policy learning and demonstrate its effectiveness on several complex, safety-critical robotic grasping tasks inspired by the game Operation, on which SAFER outperforms state-of-the-art primitive learning methods in both success and safety.
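The abstract describes learning a latent safety context by contrasting positive (safe/successful) and negative (unsafe/failed) offline experiences, which then conditions the skill prior at RL time. Below is a minimal sketch of that contrastive idea, not the authors' implementation; all names (`SafetyEncoder`, `contrastive_safety_loss`, the batch fields and dimensions) are hypothetical placeholders, and the loss shown is a generic binary contrastive objective assumed for illustration.

```python
# Hypothetical sketch of contrastive training of a latent safety context
# from mixed positive/negative offline data (not the SAFER codebase).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SafetyEncoder(nn.Module):
    """Encodes state-action pairs into a latent safety context."""

    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Scores how "safe" a latent looks; used only for the contrastive objective.
        self.score = nn.Linear(latent_dim, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))


def contrastive_safety_loss(encoder, pos_obs, pos_act, neg_obs, neg_act):
    """Binary contrastive objective: safe samples scored high, unsafe low."""
    z_pos = encoder(pos_obs, pos_act)  # from safe/successful trajectories
    z_neg = encoder(neg_obs, neg_act)  # from unsafe/failed trajectories
    logits = torch.cat([encoder.score(z_pos), encoder.score(z_neg)], dim=0).squeeze(-1)
    labels = torch.cat([torch.ones(len(z_pos)), torch.zeros(len(z_neg))], dim=0)
    return F.binary_cross_entropy_with_logits(logits, labels)


# Usage sketch: one gradient step on a mixed offline batch (random data here).
enc = SafetyEncoder(obs_dim=10, act_dim=4)
opt = torch.optim.Adam(enc.parameters(), lr=3e-4)
pos_obs, pos_act = torch.randn(64, 10), torch.randn(64, 4)
neg_obs, neg_act = torch.randn(64, 10), torch.randn(64, 4)
loss = contrastive_safety_loss(enc, pos_obs, pos_act, neg_obs, neg_act)
opt.zero_grad(); loss.backward(); opt.step()
# At RL time, the frozen latent z = enc(obs, act) would condition the skill prior
# so that the policy composes only task-appropriate safe primitives.
```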
Cite
Text
Slack et al. "SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition." ICML 2022 Workshops: DARL, 2022.

Markdown
[Slack et al. "SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition." ICML 2022 Workshops: DARL, 2022.](https://mlanthology.org/icmlw/2022/slack2022icmlw-safer/)

BibTeX
@inproceedings{slack2022icmlw-safer,
  title = {{SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition}},
  author = {Slack, Dylan Z and Chow, Yinlam and Dai, Bo and Wichers, Nevan},
  booktitle = {ICML 2022 Workshops: DARL},
  year = {2022},
  url = {https://mlanthology.org/icmlw/2022/slack2022icmlw-safer/}
}