A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents
Abstract
We study risk-sensitive RL, where the goal is to learn a history-dependent policy that optimizes some risk measure of cumulative rewards. We consider a family of risks called the optimized certainty equivalents (OCE), which captures important risk measures such as conditional value-at-risk (CVaR), entropic risk, and Markowitz’s mean-variance. In this setting, we propose two meta-algorithms: one grounded in optimism and another based on policy gradients, both of which can leverage the broad suite of risk-neutral RL algorithms in an augmented Markov Decision Process (MDP). Via a reductions approach, we apply theory for risk-neutral RL to establish novel OCE bounds in complex, rich-observation MDPs. For the optimism-based algorithm, we prove bounds that generalize prior results in CVaR RL and that provide the first risk-sensitive bounds for exogenous block MDPs. For the gradient-based algorithm, we establish both monotone improvement and global convergence guarantees under a discrete reward assumption. Finally, we empirically show that our algorithms learn the optimal history-dependent policy in a proof-of-concept MDP, where all Markovian policies provably fail.
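For context, the OCE family named in the abstract is conventionally defined through a normalized concave utility function u (with u(0) = 0); the sketch below gives this standard definition and the usual utility choices, stated as background rather than in this paper's own notation. For a random return X,

    OCE_u(X) = sup_{λ ∈ ℝ} { λ + E[ u(X − λ) ] }.

Taking u(t) = min(t, 0)/τ recovers CVaR at level τ (the Rockafellar-Uryasev form), u(t) = (1 − exp(−βt))/β recovers the entropic risk −(1/β) log E[exp(−βX)], and u(t) = t − c t² recovers the Markowitz mean-variance objective E[X] − c·Var(X).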
Cite
Text
Wang et al. "A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Wang et al. "A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wang2025icml-reductions/)
BibTeX
@inproceedings{wang2025icml-reductions,
title = {{A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents}},
author = {Wang, Kaiwen and Liang, Dawen and Kallus, Nathan and Sun, Wen},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {63636-63661},
volume = {267},
url = {https://mlanthology.org/icml/2025/wang2025icml-reductions/}
}