Transformers Are Sample-Efficient World Models

Abstract

Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a new state of the art for methods without lookahead search. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our codebase at https://github.com/eloialonso/iris.
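The architecture the abstract describes lends itself to a compact sketch. Below is a minimal PyTorch illustration of the two components named above: a discrete autoencoder that turns each frame into a grid of integer tokens, and an autoregressive Transformer that predicts the next tokens from the sequence so far. All class names, layer sizes, and the simplified nearest-neighbor quantizer are illustrative assumptions rather than the released implementation; the actual model (see the codebase linked above) also conditions on action tokens and trains the quantizer with a straight-through estimator, both omitted here for brevity. (For reference, human normalized score is the standard (agent − random) / (human − random) metric, averaged here over the 26 games.)

```python
# Illustrative sketch only -- hyperparameters and layer choices are assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn

class DiscreteAutoencoder(nn.Module):
    """Encodes a 64x64 RGB frame into an 8x8 grid of discrete tokens (VQ-style)."""
    def __init__(self, vocab_size=512, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # 64x64 -> 8x8 feature map
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.Sequential(          # 8x8 tokens -> 64x64 reconstruction
            nn.ConvTranspose2d(embed_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def encode(self, frames):
        z = self.encoder(frames)                         # (B, D, 8, 8)
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        dists = torch.cdist(flat, self.codebook.weight)  # distance to each code
        tokens = dists.argmin(dim=-1)                    # nearest codebook entry
        return tokens.view(frames.shape[0], -1)          # (B, 64) token ids

    def decode(self, tokens):
        z = self.codebook(tokens)                        # (B, 64, D)
        B, N, D = z.shape
        side = int(N ** 0.5)
        return self.decoder(z.view(B, side, side, D).permute(0, 3, 1, 2))

class WorldModelTransformer(nn.Module):
    """Autoregressively predicts the next frame tokens from past tokens."""
    def __init__(self, vocab_size=512, d_model=256, n_layers=4, max_len=1024):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                           # (B, T) token ids
        T = tokens.shape[1]
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        x = self.blocks(x, mask=mask)                    # causal self-attention
        return self.head(x)                              # next-token logits
```

In this setup, "learning in imagination" would amount to repeatedly sampling next-frame tokens from the Transformer's logits and decoding them back to pixels for the policy, so that after the world model is fit, policy improvement requires no further environment interaction.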

Cite

Text

Micheli et al. "Transformers Are Sample-Efficient World Models." NeurIPS 2022 Workshops: DeepRL, 2022.

Markdown

[Micheli et al. "Transformers Are Sample-Efficient World Models." NeurIPS 2022 Workshops: DeepRL, 2022.](https://mlanthology.org/neuripsw/2022/micheli2022neuripsw-transformers/)

BibTeX

@inproceedings{micheli2022neuripsw-transformers,
  title     = {{Transformers Are Sample-Efficient World Models}},
  author    = {Micheli, Vincent and Alonso, Eloi and Fleuret, François},
  booktitle = {NeurIPS 2022 Workshops: DeepRL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/micheli2022neuripsw-transformers/}
}