Scaling Laws for Pre-Training Agents and World Models

Abstract

The performance of embodied agents has been shown to improve with increased model parameters, dataset size, and compute. This has been demonstrated in domains from robotics to video games, when generative learning objectives on offline datasets (pre-training) are used to model an agent's behavior (imitation learning) or its environment (world modeling). This paper characterizes the role of scale in these tasks more precisely. Going beyond the simple intuition that 'bigger is better', we show that the same types of power laws found in language modeling also arise in world modeling and imitation learning (e.g., between loss and optimal model size). However, the coefficients of these laws are heavily influenced by the tokenizer, task, and architecture, which has important implications for the optimal sizing of models and data.
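A power law between loss and model size of the kind the abstract describes becomes a straight line in log-log space, so its coefficients can be recovered with a linear fit. The sketch below illustrates this with entirely hypothetical data points and coefficients (not values from the paper):

```python
import numpy as np

# Hypothetical (model size N, pre-training loss L) pairs following a
# power law L = a * N^(-b); a = 5.0 and b = 0.08 are illustrative only.
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
L = 5.0 * N ** -0.08

# In log-log space the power law is linear: log L = log a - b * log N,
# so an ordinary degree-1 polynomial fit recovers both coefficients.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
a, b = np.exp(intercept), -slope
print(f"a = {a:.2f}, b = {b:.3f}")  # recovers a = 5.00, b = 0.080
```

In practice the fitted exponent `b` is what varies with tokenizer, task, and architecture, which is why such coefficients matter for sizing models and data.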

Cite

Text

Pearce et al. "Scaling Laws for Pre-Training Agents and World Models." ICLR 2025 Workshops: World_Models, 2025.

Markdown

[Pearce et al. "Scaling Laws for Pre-Training Agents and World Models." ICLR 2025 Workshops: World_Models, 2025.](https://mlanthology.org/iclrw/2025/pearce2025iclrw-scaling/)

BibTeX

@inproceedings{pearce2025iclrw-scaling,
  title     = {{Scaling Laws for Pre-Training Agents and World Models}},
  author    = {Pearce, Tim and Rashid, Tabish and Bignell, David and Georgescu, Raluca and Devlin, Sam and Hofmann, Katja},
  booktitle = {ICLR 2025 Workshops: World_Models},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/pearce2025iclrw-scaling/}
}