MoVie: Visual Model-Based Policy Adaptation for View Generalization
Abstract
Visual Reinforcement Learning (RL) agents trained on limited views face significant challenges in generalizing their learned abilities to unseen views. This inherent difficulty is known as the problem of $\textit{view generalization}$. In this work, we systematically categorize this fundamental problem into four distinct and highly challenging scenarios that closely resemble real-world situations. Subsequently, we propose a straightforward yet effective approach to enable successful adaptation of visual $\textbf{Mo}$del-based policies for $\textbf{Vie}$w generalization ($\textbf{MoVie}$) during test time, without the need for explicit reward signals or any modification at training time. Our method demonstrates substantial improvements across all four scenarios, encompassing a total of $\textbf{18}$ tasks sourced from DMControl, xArm, and Adroit, with relative improvements of $\mathbf{33}$%, $\mathbf{86}$%, and $\mathbf{152}$%, respectively. These results highlight the immense potential of our approach for real-world robotics applications. Code and videos are available at https://yangsizhe.github.io/MoVie/.
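The abstract describes reward-free adaptation of a visual model-based policy at test time. As a minimal, hedged sketch of what such a scheme could look like (not the authors' implementation), the PyTorch snippet below fine-tunes a visual encoder at deployment using a frozen latent dynamics model as a self-supervised signal; the names (`Encoder`, `LatentDynamics`, `adapt_step`), architectures, and dimensions are all illustrative assumptions.

```python
# Hypothetical sketch of reward-free test-time adaptation for view
# generalization. All module names and shapes are illustrative; they
# are NOT the MoVie implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an RGB observation to a latent state."""
    def __init__(self, latent_dim: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, latent_dim),  # 14x14 feature map for 64x64 inputs
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class LatentDynamics(nn.Module):
    """Predicts the next latent state from (latent, action)."""
    def __init__(self, latent_dim: int = 50, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, a], dim=-1))

def adapt_step(encoder, dynamics, optimizer, obs, action, next_obs):
    """One self-supervised update: only the encoder is adapted, and the
    frozen dynamics model supplies the learning signal, so no reward is
    needed at test time."""
    z, z_next = encoder(obs), encoder(next_obs)
    # Detaching the target latent is one common way to stabilize the
    # objective; the paper's actual loss may differ.
    loss = nn.functional.mse_loss(dynamics(z, action), z_next.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    enc, dyn = Encoder(), LatentDynamics()
    for p in dyn.parameters():  # keep the pretrained dynamics model frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
    # Dummy transition standing in for data collected under an unseen view.
    obs, act, next_obs = torch.randn(8, 3, 64, 64), torch.randn(8, 4), torch.randn(8, 3, 64, 64)
    print(adapt_step(enc, dyn, opt, obs, act, next_obs))
```

This matches the abstract's constraints in spirit: the training procedure is untouched, and adaptation consumes only unlabeled test-time transitions rather than rewards.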
Cite

Text

Yang et al. "MoVie: Visual Model-Based Policy Adaptation for View Generalization." Neural Information Processing Systems, 2023.

Markdown

[Yang et al. "MoVie: Visual Model-Based Policy Adaptation for View Generalization." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/yang2023neurips-movie/)

BibTeX
@inproceedings{yang2023neurips-movie,
  title     = {{MoVie: Visual Model-Based Policy Adaptation for View Generalization}},
  author    = {Yang, Sizhe and Ze, Yanjie and Xu, Huazhe},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/yang2023neurips-movie/}
}