Multi-Modal Imitation Learning from Unstructured Demonstrations Using Generative Adversarial Nets
Abstract
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy.
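The framework described above builds on generative adversarial imitation learning with a latent "intention" variable that selects which skill the shared policy executes. The sketch below is a minimal, hypothetical PyTorch rendering of the three components the abstract implies: a policy conditioned on a latent skill code, a GAIL-style discriminator, and an intention predictor whose log-likelihood bonus encourages each latent mode to correspond to a distinct demonstrated skill. All network sizes, names, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, chosen for illustration only.
STATE_DIM, ACTION_DIM, NUM_SKILLS = 10, 4, 3


class MultiModalPolicy(nn.Module):
    """Single policy conditioned on a latent intention code (one mode per skill)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_SKILLS, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, intention_one_hot):
        return self.net(torch.cat([state, intention_one_hot], dim=-1))


class Discriminator(nn.Module):
    """Scores (state, action) pairs as expert vs. policy-generated, GAIL-style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class IntentionPredictor(nn.Module):
    """Recovers the latent intention from (state, action); rewarding its accuracy
    ties each latent mode of the policy to one skill in the demonstrations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, NUM_SKILLS),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))  # logits over skills


bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()


def discriminator_loss(d, expert_s, expert_a, policy_s, policy_a):
    # Expert pairs labeled 1, policy pairs labeled 0 (standard adversarial objective).
    expert_logits = d(expert_s, expert_a)
    policy_logits = d(policy_s, policy_a)
    return (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))


def intention_loss(q, policy_s, policy_a, intention_idx):
    # Encourage policy behaviour to reveal which latent intention generated it.
    return ce(q(policy_s, policy_a), intention_idx)
```

In this sketch, the policy would be trained (e.g. with a policy-gradient method) against a reward combining the discriminator's confusion and the intention predictor's log-likelihood, so a single network both segments the unstructured demonstrations into skills and imitates each one.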
Cite
Text
Hausman et al. "Multi-Modal Imitation Learning from Unstructured Demonstrations Using Generative Adversarial Nets." Neural Information Processing Systems, 2017.

Markdown
[Hausman et al. "Multi-Modal Imitation Learning from Unstructured Demonstrations Using Generative Adversarial Nets." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/hausman2017neurips-multimodal/)

BibTeX
@inproceedings{hausman2017neurips-multimodal,
title = {{Multi-Modal Imitation Learning from Unstructured Demonstrations Using Generative Adversarial Nets}},
author = {Hausman, Karol and Chebotar, Yevgen and Schaal, Stefan and Sukhatme, Gaurav and Lim, Joseph J.},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {1235--1245},
url = {https://mlanthology.org/neurips/2017/hausman2017neurips-multimodal/}
}