3D Equivariant Visuomotor Policy Learning via Spherical Projection
Abstract
Equivariant models have recently been shown to improve the data efficiency of diffusion policy by a significant margin. However, prior work that explored this direction focused primarily on point cloud inputs generated by multiple cameras fixed in the workspace. This type of point cloud input is not compatible with the now-common setting where the primary input modality is an eye-in-hand RGB camera like a GoPro. This paper closes this gap by incorporating into the diffusion policy model a process that projects features from the 2D RGB camera image onto a sphere. This enables us to reason about symmetries in $\mathrm{SO}(3)$ without explicitly reconstructing a point cloud. We perform extensive experiments in both simulation and the real world that demonstrate that our method consistently outperforms strong baselines in terms of both performance and sample efficiency. Our work, $\textbf{Image-to-Sphere Policy}$ ($\textbf{ISP}$), is the first $\mathrm{SO}(3)$-equivariant policy learning framework for robotic manipulation that works using only monocular RGB inputs.
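As a rough illustration of the projection step the abstract describes, the sketch below back-projects per-pixel image features onto the unit sphere along camera rays and pools them on an equiangular grid. This is a hypothetical reconstruction, not the authors' implementation: the function name `project_features_to_sphere`, the grid resolution, and the mean-pooling scatter rule are all illustrative assumptions, and only the part of the sphere inside the camera frustum receives features.

```python
# Minimal sketch, assuming a pinhole camera with intrinsics K and an
# equiangular (theta, phi) spherical grid. Not the paper's actual code.
import numpy as np

def project_features_to_sphere(feats, K, n_theta=32, n_phi=64):
    """Scatter per-pixel features onto a spherical grid.

    feats : (C, H, W) array of image features.
    K     : (3, 3) pinhole camera intrinsics.
    Returns a (C, n_theta, n_phi) spherical feature map.
    """
    C, H, W = feats.shape
    # Back-project every pixel to a unit viewing ray: d ~ K^{-1} [u, v, 1]^T.
    u, v = np.meshgrid(np.arange(W), np.arange(H))          # each (H, W)
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    rays = np.linalg.inv(K) @ pix                           # (3, H*W)
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)     # unit sphere

    # Ray directions in spherical coordinates (theta: polar, phi: azimuth).
    theta = np.arccos(np.clip(rays[2], -1.0, 1.0))          # [0, pi]
    phi = np.arctan2(rays[1], rays[0]) % (2 * np.pi)        # [0, 2*pi)

    # Quantize onto the grid and scatter-mean the pixel features per cell.
    ti = np.clip((theta / np.pi * n_theta).astype(int), 0, n_theta - 1)
    pj = np.clip((phi / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    flat = ti * n_phi + pj                                  # (H*W,) cell index
    counts = np.bincount(flat, minlength=n_theta * n_phi)
    sphere = np.zeros((C, n_theta * n_phi))
    for c in range(C):
        sphere[c] = np.bincount(flat, weights=feats[c].ravel(),
                                minlength=n_theta * n_phi)
    sphere /= np.maximum(counts, 1)                         # mean pooling
    return sphere.reshape(C, n_theta, n_phi)
```

In an SO(3)-equivariant pipeline, the resulting spherical signal would typically be consumed by spherical-CNN layers, so that rotating the camera corresponds to rotating the feature map on the sphere rather than to an arbitrary change of the input.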
Cite
Text
Hu et al. "3D Equivariant Visuomotor Policy Learning via Spherical Projection." Advances in Neural Information Processing Systems, 2025.
Markdown
[Hu et al. "3D Equivariant Visuomotor Policy Learning via Spherical Projection." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/hu2025neurips-3d/)
BibTeX
@inproceedings{hu2025neurips-3d,
title = {{3D Equivariant Visuomotor Policy Learning via Spherical Projection}},
author = {Hu, Boce and Wang, Dian and Klee, David and Tian, Heng and Zhu, Xupeng and Huang, Haojie and Platt, Robert and Walters, Robin},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/hu2025neurips-3d/}
}