Information Asymmetry in KL-Regularized RL

Abstract

Many real-world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL-regularized expected reward objective, which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning. Please watch the video demonstrating learned experts and default policies on several continuous control tasks (https://youtu.be/U2qA3llzus8).
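As a rough sketch of the objective referenced in the abstract (the notation below is not taken from this page and may differ from the paper's), the KL-regularized expected reward objective with a learned, information-restricted default policy can be written as:

```latex
% Sketch of a KL-regularized expected reward objective with an
% information-asymmetric default policy. Symbols are illustrative:
%   \pi      -- agent policy, conditioned on full information x_t
%   \pi_0    -- learned default policy, conditioned on a restricted subset x_t^D
%   \alpha   -- regularization strength (temperature)
\mathcal{L}(\pi, \pi_0) =
  \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\, r(s_t, a_t)\right]
  - \alpha\, \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\,
      \mathrm{KL}\!\left[\pi(a_t \mid x_t)\,\middle\|\,\pi_0(a_t \mid x_t^{D})\right]\right]
```

Because the default policy only sees the restricted information x_t^D, minimizing the KL term pushes it to capture behaviors that are useful across the parts of the state space it cannot distinguish, which is the "reusable behaviors" idea described above.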

Cite

Text

Galashov et al. "Information Asymmetry in KL-Regularized RL." International Conference on Learning Representations, 2019.

Markdown

[Galashov et al. "Information Asymmetry in KL-Regularized RL." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/galashov2019iclr-information/)

BibTeX

@inproceedings{galashov2019iclr-information,
  title     = {{Information Asymmetry in KL-Regularized RL}},
  author    = {Galashov, Alexandre and Jayakumar, Siddhant M. and Hasenclever, Leonard and Tirumala, Dhruva and Schwarz, Jonathan and Desjardins, Guillaume and Czarnecki, Wojciech M. and Teh, Yee Whye and Pascanu, Razvan and Heess, Nicolas},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/galashov2019iclr-information/}
}