Imagination-Augmented Agents for Deep Reinforcement Learning

Abstract

We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a trained environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several strong baselines.
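The abstract's core idea — rolling a learned environment model forward and feeding encoded "imagined" trajectories to the policy alongside model-free features — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the random weight matrices stand in for trained networks, and all names (`imagine_rollout`, `i2a_policy`, the dimensions) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, ROLLOUT_LEN = 8, 4, 3

# Stand-ins for the learned components (random weights here; in the paper
# these are trained networks). All names are illustrative, not from the paper.
W_model = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.3       # environment model
W_encode = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.3      # rollout encoder
W_policy = rng.standard_normal((N_ACTIONS, 2 * STATE_DIM)) * 0.3  # policy head

def imagine_rollout(state, action):
    """Roll the environment model forward and encode the imagined trajectory."""
    code = np.zeros(STATE_DIM)
    s = state.copy()
    s[action % STATE_DIM] += 1.0             # crude action conditioning
    for _ in range(ROLLOUT_LEN):
        s = np.tanh(W_model @ s)             # imagined next state
        code = np.tanh(W_encode @ s + code)  # recurrent-style accumulation
    return code

def i2a_policy(state):
    """Combine model-free features with aggregated imagination codes."""
    codes = [imagine_rollout(state, a) for a in range(N_ACTIONS)]
    imagination = np.mean(codes, axis=0)             # aggregate one rollout per action
    features = np.concatenate([state, imagination])  # model-free path + imagined path
    logits = W_policy @ features
    p = np.exp(logits - logits.max())                # softmax over actions
    return p / p.sum()

probs = i2a_policy(rng.standard_normal(STATE_DIM))
print(probs)  # a probability distribution over N_ACTIONS actions
```

Because the rollout encodings enter the policy only as additional input features, the agent is free to learn how much to trust the model's predictions — the "implicit planning" and robustness to model misspecification described above.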

Cite

Text

Racanière et al. "Imagination-Augmented Agents for Deep Reinforcement Learning." Neural Information Processing Systems, 2017.

Markdown

[Racanière et al. "Imagination-Augmented Agents for Deep Reinforcement Learning." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/racaniere2017neurips-imaginationaugmented/)

BibTeX

@inproceedings{racaniere2017neurips-imaginationaugmented,
  title     = {{Imagination-Augmented Agents for Deep Reinforcement Learning}},
  author    = {Racanière, Sébastien and Weber, Theophane and Reichert, David and Buesing, Lars and Guez, Arthur and Rezende, Danilo Jimenez and Badia, Adrià Puigdomènech and Vinyals, Oriol and Heess, Nicolas and Li, Yujia and Pascanu, Razvan and Battaglia, Peter and Hassabis, Demis and Silver, David and Wierstra, Daan},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {5690--5701},
  url       = {https://mlanthology.org/neurips/2017/racaniere2017neurips-imaginationaugmented/}
}