DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness

Abstract

A dexterous hand capable of grasping any object is essential for the development of general-purpose embodied intelligent robots. However, due to the high degrees of freedom of dexterous hands and the vast diversity of objects, robustly generating high-quality, usable grasping poses remains a significant challenge. In this paper, we introduce DexGrasp Anything, a method that effectively integrates physical constraints into both the training and sampling phases of a diffusion-based generative model, achieving state-of-the-art performance across nearly all open datasets. Additionally, we present a new dexterous grasping dataset containing over 3.4 million diverse grasping poses for more than 15k different objects, demonstrating its potential to advance universal dexterous grasping. Our code and dataset will be publicly released soon.
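To make the "physics constraints during sampling" idea concrete, the sketch below shows one common way such a constraint can be injected into reverse diffusion: a guidance step that nudges each denoised sample along the gradient of a differentiable penalty (here, a toy hand-object penetration term). This is a minimal illustration, not the authors' implementation; `denoise_step`, `to_keypoints`, `penetration_penalty`, and `guidance_scale` are hypothetical names assumed for the example.

```python
import torch

def penetration_penalty(hand_kp: torch.Tensor, obj_points: torch.Tensor) -> torch.Tensor:
    """Toy differentiable physics penalty (assumed form): penalize hand keypoints
    that come closer to the object point cloud than a small margin."""
    # hand_kp: (B, K, 3) hand keypoints; obj_points: (N, 3) object point cloud
    dists = torch.cdist(hand_kp, obj_points.unsqueeze(0))   # (B, K, N)
    nearest = dists.min(dim=-1).values                      # (B, K)
    return torch.clamp(0.005 - nearest, min=0.0).sum(dim=-1).mean()

@torch.no_grad()
def physics_guided_sampling(model, x_T, obj_points, timesteps, guidance_scale=1.0):
    """Reverse diffusion with a physics-guidance correction after each step
    (classifier-guidance-style sketch; model interface is assumed)."""
    x_t = x_T
    for t in reversed(timesteps):
        # Standard reverse step from the learned denoiser (assumed interface).
        x_prev = model.denoise_step(x_t, t, cond=obj_points)

        # Physics guidance: gradient of the penalty w.r.t. the current sample.
        with torch.enable_grad():
            x_req = x_prev.detach().requires_grad_(True)
            hand_kp = model.to_keypoints(x_req)              # assumed pose -> keypoint map
            loss = penetration_penalty(hand_kp, obj_points)
            grad = torch.autograd.grad(loss, x_req)[0]

        x_t = x_prev - guidance_scale * grad
    return x_t
```

The same penalty could also be added to the training loss on the denoised prediction, which is how a physics term typically enters the training phase of such models; the exact constraints used by DexGrasp Anything are described in the paper.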

Cite

Text

Zhong et al. "DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02103

Markdown

[Zhong et al. "DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/zhong2025cvpr-dexgrasp/) doi:10.1109/CVPR52734.2025.02103

BibTeX

@inproceedings{zhong2025cvpr-dexgrasp,
  title     = {{DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness}},
  author    = {Zhong, Yiming and Jiang, Qi and Yu, Jingyi and Ma, Yuexin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {22584--22594},
  doi       = {10.1109/CVPR52734.2025.02103},
  url       = {https://mlanthology.org/cvpr/2025/zhong2025cvpr-dexgrasp/}
}