Hierarchical Few-Shot Imitation with Skill Transition Models
Abstract
A desirable property of autonomous agents is the ability to both solve long-horizon problems and generalize to unseen tasks. Recent advances in data-driven skill learning have shown that extracting behavioral priors from offline data can enable agents to solve challenging long-horizon tasks with reinforcement learning. However, generalization to tasks unseen during behavioral prior training remains an outstanding challenge. To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few downstream demonstrations. FIST learns an inverse skill dynamics model, a distance function, and utilizes a semi-parametric approach for imitation. We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments requiring traversing unseen parts of a large maze and 7-DoF robotic arm experiments requiring manipulating previously unseen objects in a kitchen.
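The semi-parametric step mentioned in the abstract can be illustrated with a rough sketch. All names, signatures, and the lookahead heuristic below are assumptions for illustration, not the paper's actual interface: given the current state, retrieve the closest demonstration state under a learned distance, look a few steps ahead along that demonstration, and ask the inverse skill dynamics model for a skill connecting the two states.

```python
import numpy as np

def semi_parametric_skill_proposal(s_current, demo_states, distance,
                                   inverse_skill, lookahead=10):
    """Hypothetical sketch of semi-parametric imitation:
    - distance(s_a, s_b): a learned distance between states (assumed callable)
    - inverse_skill(s_cur, s_future): an inverse skill dynamics model proposing
      a latent skill that moves the agent from s_cur toward s_future (assumed)
    """
    # Retrieve the nearest demonstration state under the learned distance.
    dists = [distance(s_current, s) for s in demo_states]
    idx = int(np.argmin(dists))
    # Look ahead along the demonstration (clipped at its end) to pick a target.
    target = demo_states[min(idx + lookahead, len(demo_states) - 1)]
    # Propose the skill that connects the current state to the target state.
    return inverse_skill(s_current, target)
```

In this sketch the non-parametric part is the nearest-neighbor lookup over the few downstream demonstrations, while the parametric part is the learned distance and inverse skill dynamics model; the actual FIST training and inference details are in the paper.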
Cite
Text
Hakhamaneshi et al. "Hierarchical Few-Shot Imitation with Skill Transition Models." International Conference on Learning Representations, 2022.
Markdown
[Hakhamaneshi et al. "Hierarchical Few-Shot Imitation with Skill Transition Models." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/hakhamaneshi2022iclr-hierarchical/)
BibTeX
@inproceedings{hakhamaneshi2022iclr-hierarchical,
title = {{Hierarchical Few-Shot Imitation with Skill Transition Models}},
author = {Hakhamaneshi, Kourosh and Zhao, Ruihan and Zhan, Albert and Abbeel, Pieter and Laskin, Michael},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/hakhamaneshi2022iclr-hierarchical/}
}