Multi-View Masked World Models for Visual Robotic Manipulation
Abstract
Visual robotic manipulation research and applications often use multiple cameras, or views, to better perceive the world. How else can we utilize the richness of multi-view data? In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation. Specifically, we train a multi-view masked autoencoder which reconstructs pixels of randomly masked viewpoints and then learn a world model operating on the representations from the autoencoder. We demonstrate the effectiveness of our method in a range of scenarios, including multi-view control and single-view control with auxiliary cameras for representation learning. We also show that the multi-view masked autoencoder trained with multiple randomized viewpoints enables training a policy with strong viewpoint randomization and transferring the policy to solve real-robot tasks without camera calibration and an adaptation procedure. Video demonstrations are available at: https://sites.google.com/view/mv-mwm.
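For readers curious how the view-level masking described in the abstract might look in code, here is a minimal, illustrative sketch (not the authors' implementation) of randomly dropping entire camera viewpoints and additional per-view patches before encoding; the function name, shapes, and masking ratios are assumptions made purely for illustration.

```python
import numpy as np

def mask_views(patches, view_mask_ratio=0.5, patch_mask_ratio=0.75, rng=None):
    """Illustrative view-wise masking for multi-view inputs (sketch only).

    patches: array of shape (V, N, D) -- V camera views, N patch tokens
             per view, D-dimensional patch embeddings (assumed layout).
    Returns the visible tokens and a boolean mask marking dropped tokens.
    """
    rng = np.random.default_rng() if rng is None else rng
    V, N, _ = patches.shape

    # Drop a random subset of entire viewpoints.
    n_masked_views = int(round(view_mask_ratio * V))
    masked_views = set(rng.choice(V, size=n_masked_views, replace=False))

    # Within the remaining views, mask a random subset of patches
    # (in the spirit of a masked autoencoder).
    mask = np.zeros((V, N), dtype=bool)
    for v in range(V):
        if v in masked_views:
            mask[v] = True
            continue
        idx = rng.choice(N, size=int(round(patch_mask_ratio * N)), replace=False)
        mask[v, idx] = True

    visible_tokens = patches[~mask]  # (num_visible_tokens, D)
    return visible_tokens, mask


# Toy usage: 3 views, 16 patches per view, 32-dim embeddings.
tokens = np.random.randn(3, 16, 32).astype(np.float32)
visible, mask = mask_views(tokens)
print(visible.shape, int(mask.sum()), "tokens masked out of", mask.size)
```

In the paper's setup, an encoder would consume only the visible tokens and a decoder would reconstruct the pixels of the masked viewpoints; the sketch above covers only the masking step.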
Cite
Text
Seo et al. "Multi-View Masked World Models for Visual Robotic Manipulation." International Conference on Machine Learning, 2023.

Markdown

[Seo et al. "Multi-View Masked World Models for Visual Robotic Manipulation." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/seo2023icml-multiview/)

BibTeX
@inproceedings{seo2023icml-multiview,
  title     = {{Multi-View Masked World Models for Visual Robotic Manipulation}},
  author    = {Seo, Younggyo and Kim, Junsu and James, Stephen and Lee, Kimin and Shin, Jinwoo and Abbeel, Pieter},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {30613--30632},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/seo2023icml-multiview/}
}