DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-Scale Synthetic Cluttered Scenes
Abstract
Grasping in cluttered scenes remains highly challenging for dexterous hands due to the scarcity of data. To address this problem, we present a large-scale synthetic dataset, encompassing 1319 objects, 8270 scenes, and 426 million grasps. Beyond benchmarking, we also explore data-efficient strategies for learning from grasping data. We reveal that the combination of a conditional generative model that focuses on local geometry and a grasp dataset that emphasizes complex scene variations is key to achieving effective generalization. Our proposed generative method outperforms all baselines in simulation experiments. Furthermore, it demonstrates zero-shot sim-to-real transfer through test-time depth restoration, attaining a 91% real-world success rate and underscoring the potential of fully synthetic training data.
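The core design highlighted in the abstract, conditioning grasp generation on local geometry rather than a global scene embedding, can be illustrated with a minimal sketch. This is not the authors' architecture; the module name, feature dimensions, and grasp parameterization below are all hypothetical placeholders.

```python
# Minimal sketch (NOT the paper's implementation): a conditional decoder that
# maps a latent sample plus a local-geometry feature to a grasp parameter
# vector. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalGraspDecoder(nn.Module):
    def __init__(self, latent_dim=64, local_feat_dim=128, grasp_dim=25):
        super().__init__()
        # grasp_dim is hypothetical, e.g. 3 (translation) + 6 (rotation) + hand joints.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + local_feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, grasp_dim),
        )

    def forward(self, z, local_feat):
        # Condition generation on features of the local geometry around a
        # candidate grasp point, so the model generalizes across scene layouts.
        return self.net(torch.cat([z, local_feat], dim=-1))

# Usage: sample diverse grasps for one local region by resampling the latent z.
decoder = ConditionalGraspDecoder()
local_feat = torch.randn(1, 128)              # placeholder local-geometry feature
z = torch.randn(16, 64)                       # 16 latent samples
grasps = decoder(z, local_feat.expand(16, -1))
print(grasps.shape)                           # torch.Size([16, 25])
```

The point of the sketch is the conditioning choice: tying the generator to local geometry lets the same model transfer across cluttered scenes, which is the generalization behavior the abstract emphasizes.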
Cite
Text
Zhang et al. "DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-Scale Synthetic Cluttered Scenes." Proceedings of The 8th Conference on Robot Learning, 2024.
Markdown
[Zhang et al. "DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-Scale Synthetic Cluttered Scenes." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/zhang2024corl-dexgraspnet/)
BibTeX
@inproceedings{zhang2024corl-dexgraspnet,
title = {{DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-Scale Synthetic Cluttered Scenes}},
author = {Zhang, Jialiang and Liu, Haoran and Li, Danshi and Yu, XinQiang and Geng, Haoran and Ding, Yufei and Chen, Jiayi and Wang, He},
booktitle = {Proceedings of The 8th Conference on Robot Learning},
year = {2024},
pages = {5106--5133},
volume = {270},
url = {https://mlanthology.org/corl/2024/zhang2024corl-dexgraspnet/}
}