Hindsight Expectation Maximization for Goal-Conditioned Reinforcement Learning

Abstract

We propose a graphical model framework for goal-conditioned RL, with an EM algorithm that operates on the lower bound of the RL objective. The E-step provides a natural interpretation of how 'learning in hindsight' techniques, such as HER, handle extremely sparse goal-conditioned rewards. The M-step reduces policy optimization to supervised learning updates, which greatly stabilizes end-to-end training on high-dimensional inputs such as images. We show that the combined algorithm, hEM, significantly outperforms model-free baselines on a wide range of goal-conditioned benchmarks with sparse rewards.
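
The following is a minimal sketch of such a hindsight EM loop in a toy tabular setting. The environment, parameter names, and update rule are assumptions made for illustration and are not taken from the paper: the E-step relabels each rollout with the goal it actually achieved (as in HER), and the M-step fits the goal-conditioned policy to the relabeled data with a supervised log-likelihood update.

import numpy as np

# Hypothetical toy setup: a 1-D point environment where the final state is the
# "achieved goal". All names below (rollout, theta, GOALS, ...) are illustrative.
rng = np.random.default_rng(0)

GOALS = np.arange(10)           # discrete goal space (positions 0..9)
ACTIONS = np.array([-1, 0, 1])  # move left / stay / move right
HORIZON = 8

def rollout(theta, goal):
    """Collect one trajectory under a softmax policy conditioned on (state, goal)."""
    s, traj = 0, []
    for _ in range(HORIZON):
        logits = theta[s, goal]
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(len(ACTIONS), p=p)
        traj.append((s, a))
        s = int(np.clip(s + ACTIONS[a], 0, len(GOALS) - 1))
    return traj, s                # final state serves as the achieved goal

theta = np.zeros((len(GOALS), len(GOALS), len(ACTIONS)))  # tabular policy parameters

for it in range(200):
    # E-step (hindsight): relabel each trajectory with the goal it actually achieved,
    # so even "failed" rollouts become positive examples for some goal.
    batch = []
    for _ in range(16):
        g = rng.choice(GOALS)
        traj, achieved = rollout(theta, g)
        batch.append((traj, achieved))

    # M-step: supervised (behavioral-cloning-style) update of the policy toward the
    # actions taken, conditioned on the relabeled goals.
    lr = 0.5
    for traj, g in batch:
        for s, a in traj:
            logits = theta[s, g]
            p = np.exp(logits - logits.max()); p /= p.sum()
            grad = -p; grad[a] += 1.0   # gradient of the action log-likelihood
            theta[s, g] += lr * grad

Because the M-step is a plain maximum-likelihood fit on relabeled data, it sidesteps the high-variance policy-gradient estimation that sparse rewards would otherwise require; this is the stabilization the abstract refers to.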

Cite

Text

Tang and Kucukelbir. "Hindsight Expectation Maximization for Goal-Conditioned Reinforcement Learning." Artificial Intelligence and Statistics, 2021.

Markdown

[Tang and Kucukelbir. "Hindsight Expectation Maximization for Goal-Conditioned Reinforcement Learning." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/tang2021aistats-hindsight/)

BibTeX

@inproceedings{tang2021aistats-hindsight,
  title     = {{Hindsight Expectation Maximization for Goal-Conditioned Reinforcement Learning}},
  author    = {Tang, Yunhao and Kucukelbir, Alp},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {2863--2871},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/tang2021aistats-hindsight/}
}