Stable Gradients for Stable Learning at Scale in Deep Reinforcement Learning

Abstract

Scaling deep reinforcement learning networks is challenging and often results in degraded performance, yet the root causes of this failure mode remain poorly understood. Several recent works have proposed mechanisms to address this, but they are often complex and fail to highlight the causes underlying this difficulty. In this work, we conduct a series of empirical analyses which suggest that the combination of non-stationarity with gradient pathologies, arising from suboptimal architectural choices, underlies the challenges of scale. We propose a series of direct interventions that stabilize gradient flow, enabling robust performance across a range of network depths and widths. Our interventions are simple to implement, compatible with well-established algorithms, and result in an effective mechanism that enables strong performance even at large scales. We validate our findings on a variety of agents and suites of environments.
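To make the failure mode concrete: the abstract does not specify the paper's interventions, but one well-known gradient pathology in deep networks is norm explosion (or vanishing) as gradients are backpropagated through many layers. The sketch below is a generic numpy illustration, not the paper's method: it simulates the backward pass through a deep stack of random linear layers whose initialization scale is slightly too large, and contrasts it with a crude per-layer gradient rescaling (one simple example of a norm-stabilizing intervention).

```python
import numpy as np

rng = np.random.default_rng(0)
DEPTH, WIDTH = 30, 64
SCALE = 1.5  # deliberately above the norm-preserving scale of 1.0


def backward_norms(stabilize: bool) -> list[float]:
    """Simulate gradient norms during backprop through DEPTH linear layers.

    Each layer's weights are Gaussian with std SCALE/sqrt(WIDTH), so the
    expected gradient norm is multiplied by ~SCALE per layer. With
    stabilize=True, the gradient is rescaled to unit norm after every
    layer (a crude stand-in for normalization-style interventions).
    """
    g = rng.standard_normal(WIDTH)  # gradient w.r.t. the network output
    norms = []
    for _ in range(DEPTH):
        W = rng.standard_normal((WIDTH, WIDTH)) * SCALE / np.sqrt(WIDTH)
        g = W.T @ g  # backprop through one linear layer
        if stabilize:
            g = g / (np.linalg.norm(g) + 1e-8)
        norms.append(float(np.linalg.norm(g)))
    return norms


unstable = backward_norms(stabilize=False)
stable = backward_norms(stabilize=True)
print(f"final norm, unstabilized: {unstable[-1]:.3e}")
print(f"final norm, stabilized:   {stable[-1]:.3e}")
```

With the unstabilized pass, the norm grows roughly as SCALE**DEPTH (here several orders of magnitude), while the rescaled version stays at 1 by construction; the point is only that per-layer norm control, however implemented, removes the depth-dependent blow-up.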

Cite

Text

Castanyer et al. "Stable Gradients for Stable Learning at Scale in Deep Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Castanyer et al. "Stable Gradients for Stable Learning at Scale in Deep Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/castanyer2025neurips-stable/)

BibTeX

@inproceedings{castanyer2025neurips-stable,
  title     = {{Stable Gradients for Stable Learning at Scale in Deep Reinforcement Learning}},
  author    = {Castanyer, Roger Creus and Obando-Ceron, Johan and Li, Lu and Bacon, Pierre-Luc and Berseth, Glen and Courville, Aaron and Castro, Pablo Samuel},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/castanyer2025neurips-stable/}
}