World Models as Reference Trajectories for Rapid Motor Adaptation

Abstract

Learned control policies often fail when deployed in real-world environments whose dynamics differ from training: when the system shifts unexpectedly, performance degrades until models are retrained on new data. We introduce Reflexive World Models (RWM), a dual control framework that uses world model predictions as implicit reference trajectories for rapid adaptation. Our method separates the control problem into long-term reward maximization through reinforcement learning and robust motor execution through reward-free rapid control in latent space. This dual architecture achieves significantly faster adaptation at low online computational cost compared to model-based RL baselines, while maintaining near-optimal performance. The approach combines the flexibility of policy learning through reinforcement learning with rapid error correction, providing a theoretically grounded method for maintaining performance in high-dimensional continuous control tasks under varying dynamics.
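The core idea of the abstract can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's implementation: a linear latent world model's one-step prediction serves as the reference trajectory, a fixed linear "policy" stands in for the slow RL component, and a simple proportional term on the latent prediction error plays the role of the fast, reward-free corrective controller under an unmodeled dynamics shift.

```python
import numpy as np

# Hypothetical sketch of the RWM idea (toy dynamics, not the paper's code):
# the world model's one-step prediction acts as a reference trajectory, and
# a fast reward-free controller cancels deviations from it in latent space.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # latent dynamics the world model learned
B = np.array([[0.0],
              [1.0]])
K = np.array([[0.5, 1.0]])        # slow "RL" policy: a = -K z, drives z to 0
d = B * 0.5                       # unmodeled dynamics shift (actuator bias)

def rollout(gain, steps=100):
    """Mean task cost ||z|| under the shifted dynamics."""
    z = np.array([[1.0], [0.0]])
    costs = []
    for _ in range(steps):
        a = -K @ z
        z_ref = A @ z + B @ a            # world-model prediction = reference
        z = A @ z + B @ a + d            # environment after the dynamics shift
        u = gain * (B.T @ (z_ref - z))   # rapid correction from latent error
        z = z + B @ u                    # reflexive corrective action
        costs.append(float(np.linalg.norm(z)))
    return sum(costs) / len(costs)

baseline = rollout(gain=0.0)   # RL policy alone under the shift
adapted = rollout(gain=1.0)    # policy plus reflexive correction
print(f"mean |z| without correction: {baseline:.3f}")
print(f"mean |z| with correction:    {adapted:.3f}")
```

In this linear toy the corrective term exactly cancels the bias in the actuated direction, so the corrected system tracks the world model's reference and converges toward the goal, while the uncorrected policy settles at a persistent offset. No retraining of the policy or the model is needed, which is the adaptation-speed argument the abstract makes.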

Cite

Text

Brito and McNamee. "World Models as Reference Trajectories for Rapid Motor Adaptation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Brito and McNamee. "World Models as Reference Trajectories for Rapid Motor Adaptation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/brito2025neurips-world/)

BibTeX

@inproceedings{brito2025neurips-world,
  title     = {{World Models as Reference Trajectories for Rapid Motor Adaptation}},
  author    = {Brito, Carlos Stein and McNamee, Daniel C.},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/brito2025neurips-world/}
}