An Equivalence Between Loss Functions and Non-Uniform Sampling in Experience Replay

Abstract

Prioritized Experience Replay (PER) is a deep reinforcement learning technique in which agents learn from transitions sampled with non-uniform probability proportional to their temporal-difference error. We show that any loss function evaluated with non-uniformly sampled data can be transformed into another, uniformly sampled loss function with the same expected gradient. Surprisingly, we find that in some environments PER can be replaced entirely by this new loss function without impacting empirical performance. Furthermore, this relationship suggests a new branch of improvements to PER obtained by correcting its uniformly sampled loss function equivalent. We demonstrate the effectiveness of our proposed modifications to PER and the equivalent loss function in several MuJoCo and Atari environments.
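The core equivalence in the abstract can be checked numerically: sampling index i with probability p_i and minimizing L_i has the same expected gradient as sampling uniformly and minimizing the reweighted loss N * p_i * L_i. The sketch below is illustrative, not the paper's implementation; the TD errors, targets, and scalar-parameter squared loss are hypothetical choices made only to keep the arithmetic exact.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Hypothetical TD errors define PER-style sampling probabilities.
td_error = rng.uniform(0.1, 2.0, size=N)
p = td_error / td_error.sum()

# Toy per-sample loss: L_i(theta) = 0.5 * (theta - y_i)^2,
# so the per-sample gradient is dL_i/dtheta = theta - y_i.
y = rng.normal(size=N)
theta = 0.3
grad = theta - y

# Expected gradient under prioritized sampling: E_{i ~ p}[grad_i].
grad_prioritized = np.sum(p * grad)

# Expected gradient under uniform sampling of the reweighted
# loss N * p_i * L_i: E_{i ~ U}[N * p_i * grad_i].
grad_uniform = np.mean(N * p * grad)

# The two expectations coincide exactly.
assert np.isclose(grad_prioritized, grad_uniform)
```

Here the expectations are computed in closed form rather than by Monte Carlo, so the equality holds exactly; with minibatch sampling the two estimators would agree only in expectation.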

Cite

Text

Fujimoto et al. "An Equivalence Between Loss Functions and Non-Uniform Sampling in Experience Replay." Neural Information Processing Systems, 2020.

Markdown

[Fujimoto et al. "An Equivalence Between Loss Functions and Non-Uniform Sampling in Experience Replay." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/fujimoto2020neurips-equivalence/)

BibTeX

@inproceedings{fujimoto2020neurips-equivalence,
  title     = {{An Equivalence Between Loss Functions and Non-Uniform Sampling in Experience Replay}},
  author    = {Fujimoto, Scott and Meger, David and Precup, Doina},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/fujimoto2020neurips-equivalence/}
}