Hierarchical Few-Shot Imitation with Skill Transition Models
Abstract
A desirable property of autonomous agents is the ability to both solve long-horizon problems and generalize to unseen tasks. Recent advances in data-driven skill learning have shown that extracting behavioral priors from offline data can enable agents to solve challenging long-horizon tasks with reinforcement learning. However, generalization to tasks unseen during behavioral prior training remains an outstanding challenge. To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few demonstrations at test-time. FIST learns an inverse skill dynamics model and utilizes a semi-parametric approach for imitation. We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments requiring traversing unseen parts of a large maze and 7-DoF robotic arm experiments requiring manipulating previously unseen objects in a kitchen.
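To make the abstract's description concrete, below is a minimal sketch (not the authors' released code) of the test-time control loop FIST describes: a semi-parametric nearest-neighbor lookup over the few-shot demonstrations picks a target future state, an inverse skill dynamics model proposes a skill for reaching it, and a low-level skill decoder extracted from offline data emits actions. All module names, shapes, the Euclidean distance, and the fixed skill horizon are illustrative assumptions; the paper's actual architecture, training procedure, and lookup metric differ.

# Hypothetical sketch of FIST-style few-shot imitation at test time.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, SKILL_DIM, HORIZON = 30, 9, 8, 10

class InverseSkillModel(nn.Module):
    """Assumed q(z | s_t, s_{t+H}): maps (current, future) states to a skill."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * STATE_DIM, 128), nn.ReLU(), nn.Linear(128, SKILL_DIM)
        )
    def forward(self, s_t, s_future):
        return self.net(torch.cat([s_t, s_future], dim=-1))

class SkillDecoder(nn.Module):
    """Assumed low-level policy pi(a | s, z) learned from offline data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + SKILL_DIM, 128), nn.ReLU(), nn.Linear(128, ACTION_DIM)
        )
    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=-1))

def lookup_future_state(s_t, demo_states, horizon=HORIZON):
    """Semi-parametric step: find the demonstration state closest to s_t and
    return the state `horizon` steps later as the target future state."""
    dists = torch.norm(demo_states - s_t, dim=-1)            # (T,)
    idx = int(torch.argmin(dists))
    target_idx = min(idx + horizon, demo_states.shape[0] - 1)
    return demo_states[target_idx]

@torch.no_grad()
def fist_step(s_t, demo_states, inverse_model, decoder):
    s_future = lookup_future_state(s_t, demo_states)
    z = inverse_model(s_t, s_future)   # skill proposed for reaching s_future
    return decoder(s_t, z)             # action decoded from the current skill

# Usage with untrained placeholder networks and random demonstration states.
demo_states = torch.randn(200, STATE_DIM)   # one few-shot demo, T = 200 steps
s_t = torch.randn(STATE_DIM)
action = fist_step(s_t, demo_states, InverseSkillModel(), SkillDecoder())
print(action.shape)  # torch.Size([9])

In this reading, generalization to unseen tasks comes from the demonstrations supplying the target states while the skill modules, trained on broad offline data, supply the low-level behavior; this is an interpretation of the abstract, not a specification of the method.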
Cite
Text
Hakhamaneshi et al. "Hierarchical Few-Shot Imitation with Skill Transition Models." NeurIPS 2021 Workshops: DeepRL, 2021.
Markdown
[Hakhamaneshi et al. "Hierarchical Few-Shot Imitation with Skill Transition Models." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/hakhamaneshi2021neuripsw-hierarchical/)
BibTeX
@inproceedings{hakhamaneshi2021neuripsw-hierarchical,
title = {{Hierarchical Few-Shot Imitation with Skill Transition Models}},
author = {Hakhamaneshi, Kourosh and Zhao, Ruihan and Zhan, Albert and Abbeel, Pieter and Laskin, Michael},
booktitle = {NeurIPS 2021 Workshops: DeepRL},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/hakhamaneshi2021neuripsw-hierarchical/}
}