SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
Abstract
Every living organism struggles against disruptive environmental forces to carve out and maintain an orderly niche. We propose that such a struggle to achieve and preserve order might offer a principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing reinforcement learning (SMiRL). SMiRL alternates between learning a density model to evaluate the surprise of a stimulus, and improving the policy to seek more predictable stimuli. The policy seeks out stable and repeatable situations that counteract the environment's prevailing sources of entropy. This might include avoiding other hostile agents, or finding a stable, balanced pose for a bipedal robot in the face of disturbance forces. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, control a humanoid to avoid falls, and navigate to escape enemies in a maze without any task-specific reward supervision. We further show that SMiRL can be used together with standard task rewards to accelerate reward-driven learning.
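To make the alternation described in the abstract concrete, below is a minimal sketch of a SMiRL-style loop: the agent refits a density model on the states it has visited within an episode and is rewarded with the log-likelihood of each new state under that model. The diagonal-Gaussian density model, the gym-like env interface, and the helper names are illustrative assumptions, not the authors' implementation (the paper additionally conditions the policy on the density-model parameters).

import numpy as np

class GaussianDensityModel:
    # Illustrative density model: independent Gaussian per observation dimension,
    # fit to the states seen so far in the current episode.
    def __init__(self):
        self.states = []

    def update(self, state):
        self.states.append(np.asarray(state, dtype=np.float64))

    def log_prob(self, state):
        data = np.stack(self.states)
        mu = data.mean(axis=0)
        var = data.var(axis=0) + 1e-6  # small floor for numerical stability
        return float(-0.5 * np.sum((np.asarray(state) - mu) ** 2 / var + np.log(2 * np.pi * var)))

def smirl_episode(env, policy, update_policy):
    # One episode of surprise minimization: reward each step with log p_theta(s),
    # then refit the density model on the newly observed state.
    density = GaussianDensityModel()
    obs = env.reset()
    density.update(obs)
    done = False
    while not done:
        action = policy(obs)                # current policy picks an action
        obs, _, done, _ = env.step(action)  # the environment's own reward is ignored
        reward = density.log_prob(obs)      # surprise-minimizing reward
        update_policy(obs, action, reward)  # any standard RL update (e.g. DQN, PPO)
        density.update(obs)

Per the abstract, this intrinsic reward can also be added to a task reward to accelerate reward-driven learning; in this sketch that would simply mean summing the two terms before the policy update.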
Cite
Text
Berseth et al. "SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments." International Conference on Learning Representations, 2021.
Markdown
[Berseth et al. "SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/berseth2021iclr-smirl/)
BibTeX
@inproceedings{berseth2021iclr-smirl,
  title = {{SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments}},
  author = {Berseth, Glen and Geng, Daniel and Devin, Coline Manon and Rhinehart, Nicholas and Finn, Chelsea and Jayaraman, Dinesh and Levine, Sergey},
  booktitle = {International Conference on Learning Representations},
  year = {2021},
  url = {https://mlanthology.org/iclr/2021/berseth2021iclr-smirl/}
}