Small Batch Deep Reinforcement Learning

Abstract

In value-based deep reinforcement learning with replay memories, the batch size parameter specifies how many transitions to sample for each gradient update. Although critical to the learning process, this value is typically not adjusted when proposing new algorithms. In this work we present a broad empirical study that suggests reducing the batch size can result in a number of significant performance gains; this is surprising, as the general tendency when training neural networks is towards larger batch sizes for improved performance. We complement our experimental findings with a set of empirical analyses towards better understanding this phenomenon.
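To make the role of the batch size concrete, below is a minimal sketch (not the paper's code) of a replay buffer and a DQN-style update in which `batch_size` controls how many transitions are sampled per gradient step; this is the knob the paper varies. All names (ReplayBuffer, dqn_update, q_net, target_net) are illustrative, and the networks and optimizer are assumed to be defined elsewhere.

# Minimal sketch, assuming PyTorch; `batch_size` is the parameter studied in the paper.
import random
from collections import deque

import torch
import torch.nn as nn

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly sample `batch_size` transitions for one gradient update.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return (torch.stack(states),
                torch.tensor(actions),
                torch.tensor(rewards, dtype=torch.float32),
                torch.stack(next_states),
                torch.tensor(dones, dtype=torch.float32))

def dqn_update(q_net, target_net, optimizer, buffer, batch_size=32, gamma=0.99):
    # One gradient step on a minibatch of `batch_size` sampled transitions.
    states, actions, rewards, next_states, dones = buffer.sample(batch_size)
    # Q(s, a) for the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped one-step targets from the target network.
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, the kind of change the paper evaluates amounts to calling dqn_update with a smaller value than the common default (for example, batch_size=8 instead of 32), leaving the rest of the agent unchanged.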

Cite

Text

Ceron et al. "Small Batch Deep Reinforcement Learning." Neural Information Processing Systems, 2023.

Markdown

[Ceron et al. "Small Batch Deep Reinforcement Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/ceron2023neurips-small/)

BibTeX

@inproceedings{ceron2023neurips-small,
  title     = {{Small Batch Deep Reinforcement Learning}},
  author    = {Ceron, Johan Obando and Bellemare, Marc and Castro, Pablo Samuel},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/ceron2023neurips-small/}
}