Averaging $n$-Step Returns Reduces Variance in Reinforcement Learning

Abstract

Multistep returns, such as $n$-step returns and $\lambda$-returns, are commonly used to improve the sample efficiency of reinforcement learning (RL) methods. The variance of multistep returns becomes the limiting factor in their length; looking too far into the future increases variance and reverses the benefits of multistep learning. In our work, we demonstrate the ability of compound returns—weighted averages of $n$-step returns—to reduce variance. We prove for the first time that any compound return with the same contraction modulus as a given $n$-step return has strictly lower variance. We additionally prove that this variance-reduction property improves the finite-sample complexity of temporal-difference learning under linear function approximation. Because general compound returns can be expensive to implement, we introduce two-bootstrap returns, which reduce variance while remaining efficient, even when using minibatched experience replay. We conduct experiments showing that compound returns often increase the sample efficiency of $n$-step deep RL agents like DQN and PPO.
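
For context, a compound return is a convex combination of $n$-step returns. Using standard definitions (the notation below is illustrative and may differ slightly from the paper's), the $n$-step return bootstraps from a value estimate $V$ after $n$ rewards, and a compound return with weights $w_n$ averages several such returns:

$$
G_t^{(n)} = \sum_{i=0}^{n-1} \gamma^i R_{t+i+1} + \gamma^n V(S_{t+n}), \qquad G_t^{\mathbf{w}} = \sum_{n=1}^{\infty} w_n G_t^{(n)}, \quad w_n \ge 0, \ \sum_{n=1}^{\infty} w_n = 1.
$$

A two-bootstrap return corresponds to the special case with only two nonzero weights, e.g. $G_t = w\, G_t^{(n_1)} + (1-w)\, G_t^{(n_2)}$, so only two bootstrapped value estimates are needed per update rather than one per horizon.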

Cite

Text

Daley et al. "Averaging $n$-Step Returns Reduces Variance in Reinforcement Learning." International Conference on Machine Learning, 2024.

Markdown

[Daley et al. "Averaging $n$-Step Returns Reduces Variance in Reinforcement Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/daley2024icml-averaging/)

BibTeX

@inproceedings{daley2024icml-averaging,
  title     = {{Averaging $n$-Step Returns Reduces Variance in Reinforcement Learning}},
  author    = {Daley, Brett and White, Martha and Machado, Marlos C.},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {9904--9930},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/daley2024icml-averaging/}
}