Region-Aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping
Abstract
A series of region-based methods have succeeded in extracting regional features and improving grasp detection quality. However, in cluttered scenes with potential collisions, the definition of the grasp-relevant region remains inconsistent across methods. In this paper, we propose the Normalized Grasp Space (NGS) from a novel region-aware viewpoint, unifying the grasp representation within a normalized regional space and improving the generalizability of grasp detection methods. Leveraging the NGS, we find that CNNs are underestimated for 3D feature extraction and 6-DoF grasp detection in cluttered scenes, and we build a highly efficient Region-aware Normalized Grasp Network (RNGNet). Experiments on the public benchmark show that our method achieves significant performance gains of over 20% while attaining a real-time inference speed of approximately 50 FPS. Real-world cluttered-scene clearance experiments underscore the effectiveness of our method. Furthermore, human-to-robot handover and dynamic object grasping experiments demonstrate the potential of our proposed method for closed-loop grasping in dynamic scenarios.
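The paper defines the Normalized Grasp Space precisely; as a rough illustration of the general idea of expressing a grasp within a normalized regional space, the sketch below crops a local point-cloud region around a candidate grasp center and rescales it to a canonical range. The crop radius, scaling scheme, and the function name `normalize_region` are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def normalize_region(points, grasp_center, grasp_rotation, radius=0.05):
    """Illustrative sketch (not the paper's NGS definition): express a cropped
    region and a grasp pose in a normalized, region-local frame.

    points:         (N, 3) scene points in the camera/world frame
    grasp_center:   (3,) candidate grasp center in the same frame
    grasp_rotation: (3, 3) grasp orientation in the same frame
    radius:         crop radius defining the local region (assumed value)
    """
    # Keep points that fall within a spherical region around the grasp center.
    offsets = points - grasp_center
    mask = np.linalg.norm(offsets, axis=1) < radius
    # Translate to the region frame and scale so coordinates lie roughly in [-1, 1].
    region_points = offsets[mask] / radius

    # The grasp pose relative to the region: zero translation by construction;
    # the rotation is kept as-is (a region-canonical rotation could be applied instead).
    local_grasp = {"translation": np.zeros(3), "rotation": grasp_rotation}
    return region_points, local_grasp

# Example usage with random stand-in data.
if __name__ == "__main__":
    pts = np.random.rand(2048, 3)
    center = np.array([0.5, 0.5, 0.5])
    rot = np.eye(3)
    region, grasp = normalize_region(pts, center, rot)
    print(region.shape, grasp["translation"])
```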
Cite
Text
Chen et al. "Region-Aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping." Proceedings of The 8th Conference on Robot Learning, 2024.
Markdown
[Chen et al. "Region-Aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/chen2024corl-regionaware/)
BibTeX
@inproceedings{chen2024corl-regionaware,
title = {{Region-Aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping}},
author = {Chen, Siang and Xie, Pengwei and Tang, Wei and Hu, Dingchang and Dai, Yixiang and Wang, Guijin},
booktitle = {Proceedings of The 8th Conference on Robot Learning},
year = {2024},
pages = {1834--1850},
volume = {270},
url = {https://mlanthology.org/corl/2024/chen2024corl-regionaware/}
}