Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation

Abstract

Dexterous robotic hands have the capability to interact with a wide variety of household objects. However, learning robust real-world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high-quality training data. In this work, we propose a learning system (*ISAGrasp*) that leverages a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps, creating a diverse dataset for supervised learning while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation that can be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show that this method can handle entirely new semantic classes, achieving a 79% success rate on grasping unseen objects in the real world.
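The core augmentation idea is to warp an object and its demonstrated grasp through the *same* deformation, so the grasp stays semantically attached to the deformed shape. Below is a minimal sketch of that idea in Python, using a toy smooth RBF displacement field as a stand-in for the paper's learned correspondence-aware implicit generative model; all function names and parameters here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sample_deformation_field(rng, num_centers=8, scale=0.03):
    """Sample a smooth random deformation field (toy stand-in for sampling
    a latent code of a learned correspondence-aware implicit model)."""
    centers = rng.uniform(-0.1, 0.1, size=(num_centers, 3))
    offsets = rng.normal(0.0, scale, size=(num_centers, 3))

    def deform(points, bandwidth=0.05):
        # RBF-blended displacements: nearby points move coherently,
        # so surface correspondences survive the warp.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))  # (N, num_centers)
        return points + w @ offsets

    return deform

def augment_grasp(object_points, contact_points, rng):
    """Warp the object and its demonstrated grasp contacts through the
    SAME field, yielding a new (object, grasp) training pair."""
    deform = sample_deformation_field(rng)
    return deform(object_points), deform(contact_points)

rng = np.random.default_rng(0)
obj = rng.uniform(-0.08, 0.08, size=(2048, 3))  # placeholder point cloud
contacts = obj[:4]                              # placeholder fingertip contacts
new_obj, new_contacts = augment_grasp(obj, contacts, rng)
```

Repeating this with many sampled deformations turns a handful of human demonstrations into a large supervised dataset of shape-grasp pairs, which is the role the implicit generative model plays in the paper's pipeline.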

Cite

Text

Chen et al. "Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation." Conference on Robot Learning, 2022.

Markdown

[Chen et al. "Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/chen2022corl-learning/)

BibTeX

@inproceedings{chen2022corl-learning,
  title     = {{Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation}},
  author    = {Chen, Qiuyu and Van Wyk, Karl and Chao, Yu-Wei and Yang, Wei and Mousavian, Arsalan and Gupta, Abhishek and Fox, Dieter},
  booktitle = {Conference on Robot Learning},
  year      = {2022},
  pages     = {1222--1232},
  volume    = {205},
  url       = {https://mlanthology.org/corl/2022/chen2022corl-learning/}
}