Geometry Matching for Multi-Embodiment Grasping
Abstract
While significant progress has been made on the problem of generating grasps, many existing learning-based approaches still concentrate on a single embodiment, provide limited generalization to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. In this paper, we tackle the problem of multi-embodiment grasping through the lens of learning rich geometric representations for both objects and end-effectors using Graph Neural Networks (GNNs). Our novel method, GeoMatch, applies supervised learning on grasping data from multiple embodiments, learning end-to-end contact point likelihood maps as well as conditional autoregressive prediction of grasps, keypoint by keypoint. We compare our method against three baselines that provide multi-embodiment support. Our approach performs better across three end-effectors, while also providing competitive diversity of grasps. Examples can be found at geomatch.github.io.
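To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the idea as stated in the abstract: object and end-effector geometry are each embedded with a small GNN, a matching score between object vertices and gripper keypoints serves as the contact likelihood map, and contacts are decoded autoregressively keypoint by keypoint, each conditioned on the previously placed ones. All names (GeoMatchSketch, GraphConv), dimensions, and the assumption that the first K gripper nodes are the designated keypoints are illustrative only and are not taken from the paper's actual implementation.

# Minimal, hypothetical sketch (not the authors' released code): embed object and
# gripper geometry with small GNNs, score object-vertex/gripper-keypoint matches as
# a contact likelihood map, and decode contacts autoregressively keypoint-by-keypoint.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    # One round of message passing over a row-normalized dense adjacency.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        neigh = adj @ x  # aggregate neighbor features
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class GeoMatchSketch(nn.Module):
    def __init__(self, feat_dim=64, num_keypoints=5):
        super().__init__()
        self.obj_gnn = nn.ModuleList([GraphConv(3, feat_dim), GraphConv(feat_dim, feat_dim)])
        self.eef_gnn = nn.ModuleList([GraphConv(3, feat_dim), GraphConv(feat_dim, feat_dim)])
        # Conditions per-vertex features on coordinates of already-placed contacts.
        self.ar_head = nn.Sequential(
            nn.Linear(feat_dim + 3 * num_keypoints, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.num_keypoints = num_keypoints

    def encode(self, verts, adj, layers):
        h = verts
        for layer in layers:
            h = layer(h, adj)
        return h

    def forward(self, obj_verts, obj_adj, eef_verts, eef_adj):
        obj_h = self.encode(obj_verts, obj_adj, self.obj_gnn)  # (N, D) object vertex embeddings
        eef_h = self.encode(eef_verts, eef_adj, self.eef_gnn)  # (M, D) gripper vertex embeddings
        kp_h = eef_h[: self.num_keypoints]  # assumption: first K gripper nodes are the keypoints
        likelihood_map = obj_h @ kp_h.T     # (N, K) contact likelihood map
        # Autoregressive decoding: pick one object contact per gripper keypoint,
        # conditioning each choice on the coordinates of the contacts chosen so far.
        chosen, prev = [], torch.zeros(3 * self.num_keypoints)
        for k in range(self.num_keypoints):
            cond = prev.expand(obj_verts.shape[0], -1)
            obj_cond = self.ar_head(torch.cat([obj_h, cond], dim=-1))
            scores = (obj_cond * kp_h[k]).sum(-1) + likelihood_map[:, k]
            idx = int(torch.argmax(scores))
            chosen.append(idx)
            prev = prev.clone()
            prev[3 * k: 3 * k + 3] = obj_verts[idx]
        return likelihood_map, chosen

# Toy usage with random geometry and stand-in row-normalized adjacencies.
obj_v, eef_v = torch.randn(128, 3), torch.randn(32, 3)
obj_a = torch.softmax(torch.randn(128, 128), dim=-1)
eef_a = torch.softmax(torch.randn(32, 32), dim=-1)
likelihoods, contact_ids = GeoMatchSketch()(obj_v, obj_a, eef_v, eef_a)
print(likelihoods.shape, contact_ids)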
Cite
Text
Attarian et al. "Geometry Matching for Multi-Embodiment Grasping." Conference on Robot Learning, 2023.
Markdown
[Attarian et al. "Geometry Matching for Multi-Embodiment Grasping." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/attarian2023corl-geometry/)
BibTeX
@inproceedings{attarian2023corl-geometry,
title = {{Geometry Matching for Multi-Embodiment Grasping}},
author = {Attarian, Maria and Asif, Muhammad Adil and Liu, Jingzhou and Hari, Ruthrash and Garg, Animesh and Gilitschenski, Igor and Tompson, Jonathan},
booktitle = {Conference on Robot Learning},
year = {2023},
pages = {1242--1256},
volume = {229},
url = {https://mlanthology.org/corl/2023/attarian2023corl-geometry/}
}