Recurrent World Models Facilitate Policy Evolution
Abstract
A generative recurrent neural network is quickly trained in an unsupervised manner to model popular reinforcement learning environments through compressed spatio-temporal representations. The world model's extracted features are fed into compact and simple policies trained by evolution, achieving state-of-the-art results in various environments. We also train our agent entirely inside an environment generated by its own internal world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io
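The architecture behind this abstract has three parts: a vision model (V) that compresses each frame into a small latent vector z, a recurrent world model (M) whose hidden state h summarizes the history, and a compact linear controller (C) that is the only component trained by evolution. The sketch below shows how these pieces interact during a rollout. It is a minimal illustration under stated assumptions, not the paper's code: encode, world_model, and env are hypothetical stand-ins for V, M, and a Gym-style environment, while the dimensions follow the paper's CarRacing experiment.

import numpy as np

# Sketch of the agent at rollout time, assuming the paper's V-M-C split:
# V compresses a frame to a latent z_t, the recurrent world model M keeps
# a hidden state h_t, and a single linear controller C maps [z_t, h_t] to
# an action. `encode`, `world_model`, and `env` are hypothetical stand-ins.

Z_DIM, H_DIM, A_DIM = 32, 256, 3   # latent, RNN hidden, action sizes (CarRacing)

class LinearController:
    """C: one linear layer -- the compact policy trained by evolution."""
    def __init__(self, params):
        n = (Z_DIM + H_DIM) * A_DIM
        self.W = params[:n].reshape(A_DIM, Z_DIM + H_DIM)
        self.b = params[n:n + A_DIM]

    def act(self, z, h):
        # Action from the concatenated latent and memory state.
        return np.tanh(self.W @ np.concatenate([z, h]) + self.b)

def rollout(controller, env, encode, world_model):
    """One episode's return: V encodes, M remembers, C acts."""
    obs, total_reward, done = env.reset(), 0.0, False
    h = np.zeros(H_DIM)
    while not done:
        z = encode(obs)                  # V: frame -> latent z_t
        a = controller.act(z, h)         # C: [z_t, h_t] -> action
        h = world_model(z, a, h)         # M: advance the memory state
        obs, reward, done, _ = env.step(a)
        total_reward += reward
    return total_reward

In the paper, the controller's flat parameter vector is all that evolution touches: CMA-ES proposes a population of parameter vectors, each is scored by the mean return of rollouts like the one above, and the search distribution is updated toward the best performers.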
Cite
Text
Ha and Schmidhuber. "Recurrent World Models Facilitate Policy Evolution." Neural Information Processing Systems, 2018.
Markdown
[Ha and Schmidhuber. "Recurrent World Models Facilitate Policy Evolution." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/ha2018neurips-recurrent/)
BibTeX
@inproceedings{ha2018neurips-recurrent,
  title     = {{Recurrent World Models Facilitate Policy Evolution}},
  author    = {Ha, David and Schmidhuber, Jürgen},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {2450--2462},
  url       = {https://mlanthology.org/neurips/2018/ha2018neurips-recurrent/}
}