An Information-Theoretic Perspective on Intrinsic Motivation in Reinforcement Learning
Abstract
The standard reinforcement learning (RL) framework struggles with transfer learning and exploration under sparse rewards. To address these problems, a large number of heterogeneous intrinsic motivations have been proposed, such as reaching unpredictable states or unvisited states. Yet the field lacks a coherent view of these intrinsic motivations, making it hard to understand their relations as well as their underlying assumptions. Here, we propose a new taxonomy of intrinsic motivations based on information theory: we computationally revisit the notions of surprise, novelty and skill learning, and identify their main implementations through a short review of intrinsic motivations in RL. Our information-theoretic analysis paves the way towards a unifying view of complex behaviors, thereby supporting the development of new objective functions.
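The "unpredictable states" and "unvisited states" objectives mentioned in the abstract correspond to two common families of intrinsic reward bonuses: surprise (forward-model prediction error) and novelty (count-based exploration). Below is a minimal NumPy sketch of both, not taken from the paper; the class names `LinearForwardModel` and `CountNovelty`, the state discretization via `bin_size`, and the bonus weights are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearForwardModel:
    """Toy surprise bonus: squared error of a linear next-state predictor."""

    def __init__(self, state_dim, action_dim, lr=1e-2):
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr

    def surprise(self, state, action, next_state):
        # Intrinsic reward is high where the model's prediction fails,
        # i.e. in "unpredictable states".
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        return float(error @ error)

    def update(self, state, action, next_state):
        # One gradient step on the squared prediction error.
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        self.W += self.lr * np.outer(error, x)

class CountNovelty:
    """Toy novelty bonus: 1/sqrt(visit count) over discretized states."""

    def __init__(self, bin_size=0.5):
        self.counts = {}
        self.bin_size = bin_size

    def bonus(self, state):
        # Intrinsic reward decays as a state's bin is revisited,
        # favoring "unvisited states".
        key = tuple(np.floor(state / self.bin_size).astype(int))
        self.counts[key] = self.counts.get(key, 0) + 1
        return 1.0 / np.sqrt(self.counts[key])

# Usage: add a scaled bonus to the task reward at each transition.
state_dim, action_dim = 4, 2
model, novelty = LinearForwardModel(state_dim, action_dim), CountNovelty()
s = rng.normal(size=state_dim)
a = rng.normal(size=action_dim)
s_next = rng.normal(size=state_dim)
r_intrinsic = 0.1 * model.surprise(s, a, s_next) + 0.1 * novelty.bonus(s_next)
model.update(s, a, s_next)
print(r_intrinsic)
```

In practice, methods in these families replace the linear predictor with a learned neural forward model and the raw counts with pseudo-counts or density models for continuous state spaces; the sketch only conveys the shape of the two objectives.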
Cite

Text

Aubret et al. "An Information-Theoretic Perspective on Intrinsic Motivation in Reinforcement Learning." NeurIPS 2022 Workshops: InfoCog, 2022.

Markdown

[Aubret et al. "An Information-Theoretic Perspective on Intrinsic Motivation in Reinforcement Learning." NeurIPS 2022 Workshops: InfoCog, 2022.](https://mlanthology.org/neuripsw/2022/aubret2022neuripsw-informationtheoretic/)

BibTeX
@inproceedings{aubret2022neuripsw-informationtheoretic,
title = {{An Information-Theoretic Perspective on Intrinsic Motivation in Reinforcement Learning}},
author = {Aubret, Arthur and Matignon, Laetitia and Hassas, Salima},
booktitle = {NeurIPS 2022 Workshops: InfoCog},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/aubret2022neuripsw-informationtheoretic/}
}