Fuz-RL: A Fuzzy-Guided Robust Framework for Safe Reinforcement Learning Under Uncertainty
Abstract
Safe Reinforcement Learning (RL) is crucial for achieving high performance while ensuring safety in real-world applications. However, the complex interplay of multiple uncertainty sources in real environments poses significant challenges for interpretable risk assessment and robust decision-making. To address these challenges, we propose Fuz-RL, a fuzzy measure-guided robust framework for safe RL. Specifically, our framework develops a novel fuzzy Bellman operator for estimating robust value functions using Choquet integrals. Theoretically, we prove that solving the Fuz-RL problem (in Constrained Markov Decision Process (CMDP) form) is equivalent to solving distributionally robust safe RL problems (in robust CMDP form), effectively reformulating the min-max optimization problem into a tractable CMDP with Choquet-integrated value functions. Empirical analyses on safe-control-gym and safety-gymnasium scenarios demonstrate that Fuz-RL effectively integrates with existing safe RL baselines in a model-free manner, significantly improving both safety and control performance under various types of uncertainties in observation, action, and dynamics.
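For intuition, below is a minimal sketch of the discrete Choquet integral that a Choquet-integrated value estimate builds on. It is a generic illustration under stated assumptions, not the paper's implementation: the quadratic distortion g(p) = p**2 and the names choquet_integral, capacity, and q_samples are all hypothetical choices made here for the example.

import numpy as np

def choquet_integral(values, capacity):
    # Discrete Choquet integral of non-negative `values` w.r.t. a fuzzy
    # measure (capacity): sum_i (f_(i) - f_(i-1)) * mu(A_(i)), where
    # f_(1) <= ... <= f_(n) are the sorted values and A_(i) is the index
    # set of the n-i+1 largest ones. Assumes a monotone capacity with
    # mu(empty set) = 0 and mu(all outcomes) = 1.
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)            # outcomes in ascending value order
    total, prev = 0.0, 0.0
    for i in range(len(values)):
        upper_set = frozenset(order[i:].tolist())   # A_(i)
        total += (values[order[i]] - prev) * capacity(upper_set)
        prev = values[order[i]]
    return total

# Hypothetical usage: distort a uniform distribution over sampled value
# estimates with a convex g(p) = p**2, which down-weights favorable
# outcomes and yields a pessimistic estimate below the plain mean.
rng = np.random.default_rng(0)
q_samples = rng.normal(1.0, 0.5, size=8).clip(min=0.0)  # stand-in value samples
p = np.full(8, 1.0 / 8)
capacity = lambda A: sum(p[j] for j in A) ** 2          # g(P(A)) with g(p) = p**2
print(choquet_integral(q_samples, capacity))            # robust (pessimistic) estimate
print(q_samples.mean())                                 # risk-neutral baseline

Because g is convex with g(0) = 0 and g(1) = 1, we have g(p) <= p, so the integral lower-bounds the expectation; this pessimism under a distorted measure is what gives a Choquet-style value estimate its robustness-oriented character.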
Cite
Text
Wan et al. "Fuz-RL: A Fuzzy-Guided Robust Framework for Safe Reinforcement Learning Under Uncertainty." Advances in Neural Information Processing Systems, 2025.
Markdown
[Wan et al. "Fuz-RL: A Fuzzy-Guided Robust Framework for Safe Reinforcement Learning Under Uncertainty." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/wan2025neurips-fuzrl/)
BibTeX
@inproceedings{wan2025neurips-fuzrl,
title = {{Fuz-RL: A Fuzzy-Guided Robust Framework for Safe Reinforcement Learning Under Uncertainty}},
author = {Wan, Xu and Yang, Chao and Yang, Cheng and Song, Jie and Sun, Mingyang},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/wan2025neurips-fuzrl/}
}