Safety-Polarized and Prioritized Reinforcement Learning

Abstract

Motivated by real-world applications in which safety is the first priority, we propose MaxSafe, a chance-constrained bi-level optimization framework for safe reinforcement learning. MaxSafe first minimizes the unsafe probability and then maximizes the return among the safest policies. We provide a tailored Q-learning algorithm for the MaxSafe objective, featuring a novel learning process for optimal action masks with theoretical convergence guarantees. To enable the application of our algorithm to large-scale experiments, we introduce two key techniques: safety polarization and safety prioritized experience replay. Safety polarization generalizes optimal action masking by polarizing the Q-function, assigning low values to unsafe state-action pairs and thereby effectively discouraging their selection. In parallel, safety prioritized experience replay enhances the learning of optimal action masks by prioritizing samples based on temporal-difference (TD) errors derived from our proposed state-action reachability estimation functions. This approach efficiently addresses the challenges posed by sparse cost signals. Experiments on diverse autonomous driving and safe control tasks show that our methods achieve near-maximal safety and an optimal reward-safety trade-off.
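The paper's exact formulation is not reproduced on this page; as a rough illustration of the two techniques named in the abstract, the sketch below (a) polarizes a Q-function by assigning a large negative value to state-action pairs flagged as unsafe, so greedy selection avoids them, and (b) samples replay-buffer indices with probability proportional to absolute TD error, as in proportional prioritized replay. All function names, the penalty constant, and the exponent `alpha` are hypothetical, not taken from the paper.

```python
import numpy as np

PENALTY = -1e6  # hypothetical "polarized" value for unsafe actions


def polarize_q(q_values, unsafe_mask, penalty=PENALTY):
    """Assign a large negative value to actions flagged as unsafe,
    so that argmax over the polarized Q-values never selects them."""
    q = np.asarray(q_values, dtype=float).copy()
    q[np.asarray(unsafe_mask, dtype=bool)] = penalty
    return q


def sample_prioritized(td_errors, batch_size, alpha=0.6, eps=1e-3, rng=None):
    """Sample transition indices with probability proportional to
    |TD error|^alpha (proportional prioritized replay sketch)."""
    rng = rng or np.random.default_rng(0)
    p = (np.abs(np.asarray(td_errors, dtype=float)) + eps) ** alpha
    p /= p.sum()
    return rng.choice(len(p), size=batch_size, p=p)


# Toy usage: action 1 has the highest raw Q-value but is flagged unsafe,
# so the greedy choice falls back to the best safe action (action 0).
q = np.array([1.0, 2.5, 0.3])
mask = np.array([False, True, False])
greedy_action = int(np.argmax(polarize_q(q, mask)))

# Transitions with larger TD errors are sampled more often.
idx = sample_prioritized(np.array([0.1, 5.0, 0.2]), batch_size=8)
```

In the paper the TD errors driving prioritization come from the proposed state-action reachability estimation functions rather than the reward critic, which is what lets the replay scheme cope with sparse cost signals.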

Cite

Text

Fan et al. "Safety-Polarized and Prioritized Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Fan et al. "Safety-Polarized and Prioritized Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/fan2025icml-safetypolarized/)

BibTeX

@inproceedings{fan2025icml-safetypolarized,
  title     = {{Safety-Polarized and Prioritized Reinforcement Learning}},
  author    = {Fan, Ke and Zhang, Jinpeng and Zhang, Xuefeng and Wu, Yunze and Cao, Jingyu and Zhou, Yuan and Ma, Jianzhu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {15862--15886},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/fan2025icml-safetypolarized/}
}