Hindsight Experience Replay

Abstract

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from sparse and binary rewards, thereby avoiding the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient that makes training possible in these challenging environments. We show that policies trained in a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.
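The core idea behind the technique can be illustrated with a minimal sketch: after each episode, replayed transitions are additionally stored with their goal replaced by a state actually achieved later in the same episode, and the sparse binary reward recomputed for that substituted goal. This is not the authors' implementation; the function and field names (`her_relabel`, `achieved_goal`, `reward_fn`, `k`) are illustrative assumptions.

```python
import numpy as np


def her_relabel(episode, reward_fn, k=4, rng=np.random.default_rng()):
    """Hindsight relabeling sketch (the paper's "future" strategy).

    `episode` is a list of dicts with keys 'goal', 'next_achieved_goal',
    plus whatever else the transition carries (names are assumptions).
    `reward_fn(achieved_goal, goal)` returns the sparse binary reward.
    For each transition, store the original plus up to `k` copies whose
    goal is a state achieved later in the episode.
    """
    relabeled = []
    T = len(episode)
    for t, tr in enumerate(episode):
        # original transition, evaluated against the episode's actual goal
        relabeled.append(dict(tr, reward=reward_fn(tr['next_achieved_goal'], tr['goal'])))
        # hindsight transitions: pretend a later achieved state was the goal
        for ft in rng.integers(t, T, size=min(k, T - t)):
            new_goal = episode[ft]['next_achieved_goal']
            relabeled.append(dict(tr, goal=new_goal,
                                  reward=reward_fn(tr['next_achieved_goal'], new_goal)))
    return relabeled


def sparse_reward(achieved_goal, goal, eps=0.05):
    # binary reward: 0 if the goal is reached within tolerance, -1 otherwise
    return 0.0 if np.linalg.norm(np.asarray(achieved_goal) - np.asarray(goal)) < eps else -1.0
```

The relabeled transitions are then added to the replay buffer of an arbitrary off-policy learner (the paper uses DDPG), so even failed episodes yield informative, non-negative-reward experience.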

Cite

Text

Andrychowicz et al. "Hindsight Experience Replay." Neural Information Processing Systems, 2017.

Markdown

[Andrychowicz et al. "Hindsight Experience Replay." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/andrychowicz2017neurips-hindsight/)

BibTeX

@inproceedings{andrychowicz2017neurips-hindsight,
  title     = {{Hindsight Experience Replay}},
  author    = {Andrychowicz, Marcin and Wolski, Filip and Ray, Alex and Schneider, Jonas and Fong, Rachel and Welinder, Peter and McGrew, Bob and Tobin, Josh and Abbeel, Pieter and Zaremba, Wojciech},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {5048--5058},
  url       = {https://mlanthology.org/neurips/2017/andrychowicz2017neurips-hindsight/}
}