World Models as Reference Trajectories for Rapid Motor Adaptation
Abstract
Deploying learned control policies in real-world environments poses a fundamental challenge: when system dynamics change unexpectedly, performance degrades until models are retrained on new data. We introduce a dual control framework that uses world model predictions as implicit reference trajectories for rapid adaptation while preserving the policy's optimal behavior. Our method separates the control problem into long-term reward maximization through reinforcement learning and robust motor execution through rapid latent control. In continuous control tasks under varying dynamics, this achieves significantly faster adaptation than model-based RL baselines while maintaining near-optimal performance. The dual architecture combines the flexibility of learned policies with the robust adaptation capabilities of classical control, providing a principled approach to maintaining performance in high-dimensional locomotion tasks when dynamics shift.
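The abstract describes the architecture only at a high level. As a rough illustration of the idea, here is a minimal sketch of such a dual control loop, not the authors' implementation: all names (`world_model`, `policy`, `latent_correction`, the gain value) are hypothetical placeholders. A slow RL policy proposes the nominal action, a learned world model predicts the resulting latent state, and a fast correction in latent space pushes the system back toward that prediction when the true dynamics drift.

```python
import numpy as np

# Hypothetical sketch of a dual control loop: the RL policy provides the
# nominal action, the world model's prediction serves as an implicit
# reference trajectory, and a fast latent-space correction compensates
# for unmodeled dynamics changes. Names, shapes, and gains are illustrative.

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM = 4, 2

A = 0.9 * np.eye(LATENT_DIM)                   # nominal latent dynamics
B = rng.normal(size=(LATENT_DIM, ACTION_DIM))  # nominal control matrix

def world_model(z, a):
    """Predicted next latent state under the *nominal* dynamics."""
    return A @ z + B @ a

def policy(z):
    """Stand-in for the learned RL policy (here: a fixed linear map)."""
    return -0.1 * B.T @ z

def latent_correction(z_pred, z_obs, gain=0.5):
    """Fast corrective action: map the latent tracking error back through
    the control matrix via its pseudo-inverse (simplest inverse dynamics)."""
    return gain * np.linalg.pinv(B) @ (z_pred - z_obs)

# Perturbed "real" dynamics the agent must adapt to at deployment time.
A_real = A + 0.1 * rng.normal(size=A.shape)

z = rng.normal(size=LATENT_DIM)
z_pred = z.copy()
for t in range(50):
    a = policy(z) + latent_correction(z_pred, z)  # nominal + fast correction
    z_pred = world_model(z, a)                    # reference for next step
    z = A_real @ z + B @ a                        # true environment step
print("final tracking error:", np.linalg.norm(z_pred - z))
```

In the paper the correction presumably operates in the world model's learned latent space with a tuned or learned gain; the pseudo-inverse above is only the simplest stand-in for an inverse-dynamics map.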
Cite
Text
Brito and McNamee. "World Models as Reference Trajectories for Rapid Motor Adaptation." ICLR 2025 Workshops: WRL, 2025.
Markdown
[Brito and McNamee. "World Models as Reference Trajectories for Rapid Motor Adaptation." ICLR 2025 Workshops: WRL, 2025.](https://mlanthology.org/iclrw/2025/brito2025iclrw-world/)
BibTeX
@inproceedings{brito2025iclrw-world,
  title = {{World Models as Reference Trajectories for Rapid Motor Adaptation}},
  author = {Brito, Carlos Stein and McNamee, Daniel C},
  booktitle = {ICLR 2025 Workshops: WRL},
  year = {2025},
  url = {https://mlanthology.org/iclrw/2025/brito2025iclrw-world/}
}