Event Tables for Efficient Experience Replay

Abstract

Experience replay (ER) is a crucial component of many deep reinforcement learning (RL) systems. However, uniform sampling from an ER buffer can lead to slow convergence and unstable asymptotic behaviors. This paper introduces Stratified Sampling from Event Tables (SSET), which partitions an ER buffer into Event Tables, each capturing important subsequences of optimal behavior. We prove a theoretical advantage over the traditional monolithic buffer approach and combine SSET with an existing prioritized sampling strategy to further improve learning speed and stability. Empirical results in challenging MiniGrid domains, benchmark RL environments, and a high-fidelity car racing simulator demonstrate the advantages and versatility of SSET over existing ER buffer sampling techniques.
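To make the core idea concrete, here is a minimal sketch of stratified sampling from event tables. The event predicates, history-window length, and per-table sampling proportions are illustrative placeholders, not the paper's exact algorithm or hyperparameters:

```python
import random
from collections import deque

class SSETBuffer:
    """Sketch of an ER buffer partitioned into event tables.

    When an event predicate fires on a transition, the recent
    subsequence leading to that event is copied into the event's
    table; minibatches are then drawn in fixed proportions from
    the event tables, with the remainder sampled uniformly from
    the default (monolithic) table. All names and parameters here
    are hypothetical, for illustration only.
    """

    def __init__(self, event_fns, proportions, history_len=3, seed=0):
        self.default = []                       # default table: all transitions
        self.tables = {k: [] for k in event_fns}
        self.event_fns = event_fns              # name -> predicate(transition)
        self.proportions = proportions          # name -> fraction of minibatch
        self.history = deque(maxlen=history_len)
        self.rng = random.Random(seed)

    def add(self, transition):
        self.default.append(transition)
        self.history.append(transition)
        # Copy the subsequence preceding (and including) an event
        # into that event's table.
        for name, fires in self.event_fns.items():
            if fires(transition):
                self.tables[name].extend(self.history)

    def sample(self, batch_size):
        batch = []
        # Draw the configured fraction from each non-empty event table.
        for name, frac in self.proportions.items():
            table = self.tables[name]
            if table:
                batch.extend(self.rng.choices(table, k=int(frac * batch_size)))
        # Fill the remainder uniformly from the default table.
        batch.extend(self.rng.choices(self.default, k=batch_size - len(batch)))
        return batch

# Usage: a single "goal" event defined by positive reward.
buf = SSETBuffer(
    event_fns={"goal": lambda t: t["reward"] > 0},
    proportions={"goal": 0.4},
    history_len=3,
)
for i in range(20):
    buf.add({"step": i, "reward": 1.0 if i == 10 else 0.0})
batch = buf.sample(10)
```

In this sketch, roughly 40% of each minibatch comes from transitions near the rewarding event, which is the mechanism the abstract credits for faster, more stable learning than uniform sampling.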

Cite

Text

Kompella et al. "Event Tables for Efficient Experience Replay." Transactions on Machine Learning Research, 2023.

Markdown

[Kompella et al. "Event Tables for Efficient Experience Replay." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/kompella2023tmlr-event/)

BibTeX

@article{kompella2023tmlr-event,
  title     = {{Event Tables for Efficient Experience Replay}},
  author    = {Kompella, Varun Raj and Walsh, Thomas and Barrett, Samuel and Wurman, Peter R. and Stone, Peter},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/kompella2023tmlr-event/}
}