One-Shot Imitation Learning: A Pose Estimation Perspective
Abstract
In this paper, we study imitation learning under the challenging setting of: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge. We show how, with these constraints, imitation learning can be formulated as a combination of trajectory transfer and unseen object pose estimation. To explore this idea, we provide an in-depth study on how state-of-the-art unseen object pose estimators perform for one-shot imitation learning on ten real-world tasks, and we take a deep dive into the effects that camera calibration, pose estimation error, and spatial generalisation have on task success rates. For videos, please visit www.robot-learning.uk/pose-estimation-perspective.
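The abstract's core idea of combining trajectory transfer with unseen object pose estimation can be illustrated with a minimal sketch. This is an assumed, hypothetical implementation (the function name, arguments, and 4x4 homogeneous-matrix representation are not from the paper): given the object's estimated pose at demonstration time and at test time, the relative transform between the two is applied to every end-effector waypoint, preserving the gripper-object relationship from the single demonstration.

```python
import numpy as np

def transfer_trajectory(traj_demo, T_obj_demo, T_obj_test):
    """Re-target a demonstrated end-effector trajectory to a new object pose.

    traj_demo:  list of 4x4 end-effector poses in the world frame (the demo).
    T_obj_demo: 4x4 object pose in the world frame at demonstration time.
    T_obj_test: 4x4 object pose at test time, e.g. from an unseen object
                pose estimator (hypothetical inputs for illustration).
    """
    # Relative transform mapping the demo-time object pose onto the test-time one.
    T_rel = T_obj_test @ np.linalg.inv(T_obj_demo)
    # Applying it to every waypoint keeps the end-effector in the same pose
    # relative to the object as in the original demonstration.
    return [T_rel @ T_ee for T_ee in traj_demo]
```

Under this formulation, task success hinges directly on the accuracy of `T_obj_test`, which is why the paper studies pose estimation error, camera calibration, and spatial generalisation.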
Cite
Text
Vitiello et al. "One-Shot Imitation Learning: A Pose Estimation Perspective." Conference on Robot Learning, 2023.
Markdown
[Vitiello et al. "One-Shot Imitation Learning: A Pose Estimation Perspective." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/vitiello2023corl-oneshot/)
BibTeX
@inproceedings{vitiello2023corl-oneshot,
  title     = {{One-Shot Imitation Learning: A Pose Estimation Perspective}},
  author    = {Vitiello, Pietro and Dreczkowski, Kamil and Johns, Edward},
  booktitle = {Conference on Robot Learning},
  year      = {2023},
  pages     = {943-970},
  volume    = {229},
  url       = {https://mlanthology.org/corl/2023/vitiello2023corl-oneshot/}
}