3D-MVP: 3D Multiview Pretraining for Manipulation

Abstract

Recent works have shown that visual pretraining on egocentric datasets using masked autoencoders (MAE) can improve generalization for downstream robotics tasks. However, these approaches pretrain only on 2D images, while many robotics applications require 3D scene understanding. In this work, we propose 3D-MVP, a novel approach for 3D Multi-View Pretraining using masked autoencoders. We leverage the Robotic View Transformer (RVT), which uses a multi-view transformer to understand the 3D scene and predict gripper pose actions. We split RVT's multi-view transformer into a visual encoder and an action decoder, and pretrain the visual encoder using masked autoencoding on large-scale 3D datasets such as Objaverse. We evaluate 3D-MVP on a suite of virtual robot manipulation tasks and demonstrate improved performance over baselines. Our results suggest that 3D-aware pretraining is a promising approach to improving the generalization of vision-based robotic manipulation policies.
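To make the pretraining recipe concrete, below is a minimal PyTorch sketch of masked-autoencoder pretraining over a set of rendered views pooled into a single token sequence. This is an illustrative approximation under standard MAE assumptions, not the paper's implementation; all names and hyperparameters (MultiViewMAE, num_views, mask_ratio, and so on) are hypothetical.

```python
# Hypothetical sketch: MAE-style pretraining on multi-view renderings.
# Not the paper's code; class/parameter names are illustrative only.
import torch
import torch.nn as nn

class MultiViewMAE(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, num_views=5, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        n_patches = (img_size // patch) ** 2 * num_views          # patches pooled across all views
        self.proj = nn.Linear(patch * patch * 3, dim)             # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))   # learned view + location embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), depth)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), 2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, patch * patch * 3)             # reconstruct raw patch pixels

    def patchify(self, imgs):
        # imgs: (B, V, 3, H, W) -> (B, V * H/p * W/p, p*p*3)
        B, V, C, H, W = imgs.shape
        p = self.patch
        x = imgs.reshape(B, V, C, H // p, p, W // p, p)
        return x.permute(0, 1, 3, 5, 4, 6, 2).reshape(B, V * (H // p) * (W // p), p * p * C)

    def forward(self, imgs):
        patches = self.patchify(imgs)
        tokens = self.proj(patches) + self.pos
        B, L, D = tokens.shape
        keep = int(L * (1 - self.mask_ratio))
        idx = torch.rand(B, L, device=tokens.device).argsort(dim=1)   # random permutation per sample
        vis_idx, mask_idx = idx[:, :keep], idx[:, keep:]
        visible = torch.gather(tokens, 1, vis_idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)                                # encode visible patches only
        # Pad with mask tokens, restore original order, then decode.
        full = torch.cat([latent, self.mask_token.expand(B, L - keep, D)], dim=1)
        restore = idx.argsort(dim=1)
        full = torch.gather(full, 1, restore.unsqueeze(-1).expand(-1, -1, D))
        pred = self.head(self.decoder(full))
        # Reconstruction loss on masked patches only, as in standard MAE.
        mask = torch.zeros(B, L, device=tokens.device)
        mask.scatter_(1, mask_idx, 1.0)
        return (((pred - patches) ** 2).mean(-1) * mask).sum() / mask.sum()

# Example: a batch of 2 scenes, each rendered from 5 virtual views.
model = MultiViewMAE()
loss = model(torch.randn(2, 5, 3, 224, 224))
loss.backward()
```

In the pipeline described by the abstract, the pretrained visual encoder would then be paired with RVT's action decoder and fine-tuned on manipulation demonstrations.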

Cite

Text

Qian et al. "3D-MVP: 3D Multiview Pretraining for Manipulation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02098

Markdown

[Qian et al. "3D-MVP: 3D Multiview Pretraining for Manipulation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/qian2025cvpr-3dmvp/) doi:10.1109/CVPR52734.2025.02098

BibTeX

@inproceedings{qian2025cvpr-3dmvp,
  title     = {{3D-MVP: 3D Multiview Pretraining for Manipulation}},
  author    = {Qian, Shengyi and Mo, Kaichun and Blukis, Valts and Fouhey, David F. and Fox, Dieter and Goyal, Ankit},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {22530--22539},
  doi       = {10.1109/CVPR52734.2025.02098},
  url       = {https://mlanthology.org/cvpr/2025/qian2025cvpr-3dmvp/}
}