Toward Human-like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand

Abstract

In recent years, many dexterous robotic hands have been designed to assist or replace human hands in a variety of tasks, but teaching them to perform dexterous operations as human hands do remains a challenging problem. In this paper, we propose a grasp synthesis framework that enables robots to grasp and manipulate objects in a human-like way. We first build a dataset by accurately segmenting the functional areas of each object and annotating a semantic touch code for every functional area, which guides the dexterous hand to complete the functional grasp and the post-grasp manipulation. The dataset contains 129 objects in 18 categories selected from four existing datasets, and 15 people participated in the annotation. We then carefully design four loss functions to constrain the network, which successfully generates functional grasps for a dexterous hand under the guidance of the semantic touch codes. Thorough experiments on synthetic data show that our model robustly generates functional grasps, even for objects it has never seen before.
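
The abstract describes per-functional-area annotations (a "semantic touch code" per segmented area) and a network trained with four loss terms. The paper's actual annotation schema, loss definitions, and weights are not given on this page; the following is only a minimal, hypothetical Python sketch of how such annotations and a four-term objective could be organized. All names (FunctionalArea, touch_code, total_loss, the weights) are assumptions for illustration, not the authors' released code.

# Hypothetical sketch: per-functional-area annotations and a weighted
# four-term training objective. Names and structure are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FunctionalArea:
    label: str                 # e.g. "handle" or "button" (illustrative labels)
    point_indices: List[int]   # object points belonging to this functional area
    touch_code: List[int]      # semantic touch code: which hand parts should touch this area

@dataclass
class AnnotatedObject:
    category: str              # one of the 18 object categories
    areas: List[FunctionalArea]

def total_loss(terms: Tuple[float, float, float, float],
               weights: Tuple[float, float, float, float] = (1.0, 1.0, 1.0, 1.0)) -> float:
    """Weighted sum of four loss terms constraining the grasp network.
    The specific terms and weights are placeholders, not the paper's."""
    return sum(w * t for w, t in zip(weights, terms))

# Example usage with made-up loss values for the four placeholder terms.
loss = total_loss((0.3, 0.1, 0.05, 0.2), weights=(1.0, 0.5, 2.0, 1.0))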

Cite

Text

Zhu et al. "Toward Human-like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01545

Markdown

[Zhu et al. "Toward Human-like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zhu2021iccv-humanlike/) doi:10.1109/ICCV48922.2021.01545

BibTeX

@inproceedings{zhu2021iccv-humanlike,
  title     = {{Toward Human-like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand}},
  author    = {Zhu, Tianqiang and Wu, Rina and Lin, Xiangbo and Sun, Yi},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {15741--15751},
  doi       = {10.1109/ICCV48922.2021.01545},
  url       = {https://mlanthology.org/iccv/2021/zhu2021iccv-humanlike/}
}