Regret Bounds for Risk-Sensitive Reinforcement Learning

Abstract

In safety-critical applications of reinforcement learning, such as healthcare and robotics, it is often desirable to optimize risk-sensitive objectives that account for tail outcomes rather than expected reward. We prove the first regret bounds for reinforcement learning under a general class of risk-sensitive objectives, including the popular conditional value at risk (CVaR) objective. Our theory rests on a novel characterization of the CVaR objective together with a novel optimistic MDP construction.
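
For reference, the CVaR objective mentioned in the abstract admits the standard Rockafellar–Uryasev variational form; the sketch below uses generic textbook notation for a reward random variable $Z$ at level $\alpha \in (0, 1]$, not necessarily the paper's exact formulation:

\mathrm{CVaR}_\alpha(Z) \;=\; \sup_{\tau \in \mathbb{R}} \Big\{ \tau - \tfrac{1}{\alpha}\, \mathbb{E}\big[(\tau - Z)^{+}\big] \Big\}, \qquad (x)^{+} = \max\{x, 0\}.

For continuous $Z$ this equals $\mathbb{E}\big[Z \mid Z \le F_Z^{-1}(\alpha)\big]$, the expected reward over the worst $\alpha$-fraction of outcomes; setting $\alpha = 1$ recovers the usual expected-reward objective.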

Cite

Text

Bastani et al. "Regret Bounds for Risk-Sensitive Reinforcement Learning." Neural Information Processing Systems, 2022.

Markdown

[Bastani et al. "Regret Bounds for Risk-Sensitive Reinforcement Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bastani2022neurips-regret/)

BibTeX

@inproceedings{bastani2022neurips-regret,
  title     = {{Regret Bounds for Risk-Sensitive Reinforcement Learning}},
  author    = {Bastani, Osbert and Ma, Jason Yecheng and Shen, Estelle and Xu, Wanqiao},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/bastani2022neurips-regret/}
}