OrbitGrasp: SE(3)-Equivariant Grasp Learning
Abstract
While grasp detection is an important part of any robotic manipulation pipeline, reliable and accurate grasp detection in $\mathrm{SE}(3)$ remains a research challenge. Many robotics applications in unstructured environments such as the home or warehouse would benefit significantly from improved grasp performance. This paper proposes a novel framework for detecting $\mathrm{SE}(3)$ grasp poses based on point cloud input. Our main contribution is an $\mathrm{SE}(3)$-equivariant model that maps each point in the cloud to a continuous grasp quality function over the 2-sphere $S^2$ using a spherical harmonic basis. Compared with reasoning about a finite set of grasp samples, this formulation improves both the accuracy and the efficiency of our model when a large number of samples would otherwise be needed. To accomplish this, we propose a novel variation on EquiFormerV2 that leverages a UNet-style backbone to increase the number of points the model can process. Our resulting method, which we name OrbitGrasp, significantly outperforms baselines in both simulation and physical experiments.
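To make the spherical-harmonic formulation concrete, the sketch below shows one way a per-point coefficient vector could be decoded into grasp quality values at arbitrary unit approach directions on $S^2$. This is only an illustrative reconstruction of the idea, not the authors' implementation: the function names, the real spherical harmonic convention, and the coefficient ordering are assumptions made for the example.

```python
import numpy as np
from scipy.special import sph_harm


def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic of degree l, order m at azimuth theta, polar angle phi
    (built from scipy's complex sph_harm; sign/normalization convention is assumed)."""
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, theta, phi).real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, theta, phi).imag
    return sph_harm(0, l, theta, phi).real


def grasp_quality_on_sphere(coeffs, directions, l_max):
    """Decode a point's spherical-harmonic coefficients into grasp quality values
    at the given (N, 3) unit approach directions.

    coeffs is assumed to be ordered (l, m) = (0,0), (1,-1), (1,0), (1,1), ...,
    with length (l_max + 1) ** 2.
    """
    x, y, z = directions.T
    theta = np.arctan2(y, x)                 # azimuthal angle
    phi = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle
    quality = np.zeros(len(directions))
    idx = 0
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            quality += coeffs[idx] * real_sph_harm(l, m, theta, phi)
            idx += 1
    return quality
```

Because the function is continuous over $S^2$, candidate approach directions can be scored densely (or optimized over) without committing to a fixed discrete sample set in advance.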
Cite
Text:
Hu et al. "OrbitGrasp: SE(3)-Equivariant Grasp Learning." Proceedings of The 8th Conference on Robot Learning, 2024.

Markdown:
[Hu et al. "OrbitGrasp: SE(3)-Equivariant Grasp Learning." Proceedings of The 8th Conference on Robot Learning, 2024.](https://mlanthology.org/corl/2024/hu2024corl-orbitgrasp/)

BibTeX:
@inproceedings{hu2024corl-orbitgrasp,
title = {{OrbitGrasp: SE(3)-Equivariant Grasp Learning}},
author = {Hu, Boce and Zhu, Xupeng and Wang, Dian and Dong, Zihao and Huang, Haojie and Wang, Chenghao and Walters, Robin and Platt, Robert},
booktitle = {Proceedings of The 8th Conference on Robot Learning},
year = {2024},
pages = {2456-2474},
volume = {270},
url = {https://mlanthology.org/corl/2024/hu2024corl-orbitgrasp/}
}