Demonstration-Guided Reinforcement Learning with Learned Skills
Abstract
Demonstration-guided reinforcement learning (RL) is a promising approach for learning complex behaviors by leveraging both reward feedback and a set of target task demonstrations. Prior approaches for demonstration-guided RL treat every new task as an independent learning problem and attempt to follow the provided demonstrations step-by-step, akin to a human trying to imitate a completely unseen behavior by following the demonstrator's exact muscle movements. Naturally, such learning will be slow, but often new behaviors are not completely unseen: they share subtasks with behaviors we have previously learned. In this work, we aim to exploit this shared subtask structure to increase the efficiency of demonstration-guided RL. We first learn a set of reusable skills from large offline datasets of prior experience collected across many tasks. We then propose an algorithm for demonstration-guided RL that efficiently leverages the provided demonstrations by following the demonstrated skills instead of the primitive actions, resulting in substantial performance improvements over prior demonstration-guided RL approaches. We validate the effectiveness of our approach on long-horizon maze navigation and complex robot manipulation tasks.
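The key idea, acting over demonstrated skills rather than primitive actions, can be sketched in a toy form. The sketch below is an illustrative assumption, not the paper's actual architecture: it treats a "skill" as a fixed-length chunk of primitive actions, segments a demonstration into such chunks, and has a policy emit one skill per segment (a placeholder for an RL policy regularized toward the demonstrated skill prior).

```python
import numpy as np

H = 5  # skill horizon, i.e. primitive actions per skill (assumed value)

def segment_demo_into_skills(demo_actions, horizon=H):
    """Split a demonstration's primitive-action sequence into skill-length chunks."""
    n = len(demo_actions) // horizon
    return [np.asarray(demo_actions[i * horizon:(i + 1) * horizon]) for i in range(n)]

class SkillSpacePolicy:
    """Toy policy that selects a skill (action chunk) every H steps instead of
    a primitive action every step, then executes the chunk open-loop."""
    def __init__(self, demo_skills):
        self.demo_skills = demo_skills

    def act(self, step):
        # Follow the demonstrated skill for the current segment; in the actual
        # method this would be an RL policy guided toward demonstrated skills.
        skill = self.demo_skills[min(step // H, len(self.demo_skills) - 1)]
        return skill[step % H]

demo = list(range(20))  # toy 1-D primitive-action trace
skills = segment_demo_into_skills(demo)
policy = SkillSpacePolicy(skills)
rollout = [int(policy.act(t)) for t in range(20)]
```

Because the policy here simply replays the demonstrated skills, the rollout reproduces the demonstration; the intended benefit is that temporal abstraction shrinks the horizon over which the RL agent must explore and match the demonstration.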
Cite
Text
Anonymous. "Demonstration-Guided Reinforcement Learning with Learned Skills." ICLR 2021 Workshops: SSL-RL, 2021.
Markdown
[Anonymous. "Demonstration-Guided Reinforcement Learning with Learned Skills." ICLR 2021 Workshops: SSL-RL, 2021.](https://mlanthology.org/iclrw/2021/anonymous2021iclrw-demonstrationguided/)
BibTeX
@inproceedings{anonymous2021iclrw-demonstrationguided,
title = {{Demonstration-Guided Reinforcement Learning with Learned Skills}},
author = {Anonymous},
booktitle = {ICLR 2021 Workshops: SSL-RL},
year = {2021},
url = {https://mlanthology.org/iclrw/2021/anonymous2021iclrw-demonstrationguided/}
}