Imagined Autocurricula

Abstract

Training agents to act in embodied environments typically requires vast training data or access to accurate simulation, neither of which exists for many cases in the real world. Instead, world models are emerging as an alternative: leveraging offline, passively collected data, they make it possible to generate diverse worlds for training agents in simulation. In this work, we harness world models to generate "imagined" environments to train robust agents capable of generalizing to novel task variations. One of the challenges in doing this is ensuring the agent trains on useful generated data. We thus propose IMAC (Imagined Autocurricula), a novel approach that leverages Unsupervised Environment Design (UED) to induce an automatic curriculum over generated worlds. In a series of challenging, procedurally generated environments, we show it is possible to achieve strong transfer performance on held-out environments having trained only inside a world model learned from a narrower dataset. We believe this opens the path to utilizing larger-scale, foundation world models for generally capable agents.
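To make the idea of an automatic curriculum over generated worlds concrete, the following is a minimal sketch of a regret-based curriculum in the style of UED methods such as Prioritized Level Replay. It is an illustration only, not IMAC's implementation: the names `generate_fn`, `regret_score`, and `ImaginedCurriculum` are hypothetical, and the "levels" stand in for environments imagined by a world model.

```python
import random


def regret_score(returns, max_return):
    # Regret proxy: gap between an estimated best achievable return
    # on a level and the best return the agent actually obtained.
    return max_return - max(returns)


class ImaginedCurriculum:
    """Minimal PLR-style curriculum over world-model-generated levels.

    Hypothetical sketch: `generate_fn` stands in for sampling a fresh
    environment from a learned world model.
    """

    def __init__(self, replay_prob=0.5, buffer_size=100):
        self.replay_prob = replay_prob
        self.buffer_size = buffer_size
        self.buffer = []  # list of (level, regret score)

    def next_level(self, generate_fn):
        if self.buffer and random.random() < self.replay_prob:
            # Replay: sample stored levels proportional to regret,
            # focusing training on levels at the agent's frontier.
            levels, scores = zip(*self.buffer)
            total = sum(scores) or 1.0
            weights = [s / total for s in scores]
            return random.choices(levels, weights=weights)[0]
        # Otherwise imagine a fresh level from the world model.
        return generate_fn()

    def update(self, level, score):
        # Keep only the highest-regret levels in the buffer.
        self.buffer.append((level, score))
        self.buffer.sort(key=lambda x: -x[1])
        del self.buffer[self.buffer_size:]
```

The key design choice this illustrates is the two-armed loop: with some probability the agent replays a stored high-regret level, otherwise it trains on a newly imagined one, so the curriculum automatically tracks levels the agent has not yet mastered.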

Cite

Text

Güzel et al. "Imagined Autocurricula." Advances in Neural Information Processing Systems, 2025.

Markdown

[Güzel et al. "Imagined Autocurricula." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/guzel2025neurips-imagined/)

BibTeX

@inproceedings{guzel2025neurips-imagined,
  title     = {{Imagined Autocurricula}},
  author    = {Güzel, Ahmet H. and Jackson, Matthew Thomas and Liesen, Jarek Luca and Rocktäschel, Tim and Foerster, Jakob Nicolaus and Bogunovic, Ilija and Parker-Holder, Jack},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/guzel2025neurips-imagined/}
}