State Entropy Regularization for Robust Reinforcement Learning

Abstract

State entropy regularization has empirically been shown to improve exploration and sample complexity in reinforcement learning (RL). However, its theoretical guarantees have not been studied. In this paper, we show that state entropy regularization improves robustness to structured and spatially correlated perturbations. These types of variation are common in transfer learning but are often overlooked by standard robust RL methods, which typically focus on small, uncorrelated changes. We provide a comprehensive characterization of these robustness properties, including formal guarantees under reward and transition uncertainty, as well as settings where the method performs poorly. Much of our analysis contrasts state entropy with the widely used policy entropy regularization, highlighting their different benefits. Finally, from a practical standpoint, we illustrate that, compared with policy entropy, the robustness advantages of state entropy are more sensitive to the number of rollouts used for policy evaluation.
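For context, a common form of the state-entropy-regularized objective (generic notation, not necessarily the paper's exact formulation) adds the entropy of the discounted state-occupancy measure $d^{\pi}$ to the expected return:

$$\max_{\pi} \; J_{\tau}(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right] \;+\; \tau\, \mathcal{H}\!\left(d^{\pi}\right), \qquad \mathcal{H}\!\left(d^{\pi}\right) \;=\; -\sum_{s} d^{\pi}(s)\log d^{\pi}(s),$$

where $\tau > 0$ is a regularization coefficient. Policy entropy regularization, by contrast, replaces $\mathcal{H}(d^{\pi})$ with $\mathbb{E}_{s \sim d^{\pi}}\big[\mathcal{H}(\pi(\cdot \mid s))\big]$, i.e., the expected entropy of the action distribution rather than of state visitation.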

Cite

Text

Ashlag et al. "State Entropy Regularization for Robust Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Ashlag et al. "State Entropy Regularization for Robust Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/ashlag2025neurips-state/)

BibTeX

@inproceedings{ashlag2025neurips-state,
  title     = {{State Entropy Regularization for Robust Reinforcement Learning}},
  author    = {Ashlag, Yonatan and Koren, Uri and Mutti, Mirco and Derman, Esther and Bacon, Pierre-Luc and Mannor, Shie},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/ashlag2025neurips-state/}
}