Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach

Abstract

We study function approximation for episodic reinforcement learning with the entropic risk measure. We first propose an algorithm with linear function approximation. Compared to existing algorithms, which suffer from improper regularization and regression biases, this algorithm features debiasing transformations in its backward induction and regression procedures. We further propose an algorithm with general function approximation, which features implicit debiasing transformations. We prove that both algorithms achieve sublinear regret and demonstrate a trade-off between generality and efficiency. Our analysis provides a unified framework for function approximation in risk-sensitive reinforcement learning, yielding the first sublinear regret bounds in this setting.
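The entropic risk measure referenced in the abstract is the standard objective $\frac{1}{\beta}\log \mathbb{E}[e^{\beta X}]$ over a return $X$ with risk parameter $\beta$. A minimal numerical sketch of this quantity (the function name and sample data are illustrative, not from the paper):

```python
import numpy as np

def entropic_risk(returns, beta):
    """Entropic risk of samples X: (1/beta) * log E[exp(beta * X)].

    beta < 0 is risk-averse, beta > 0 is risk-seeking;
    as beta -> 0 the measure recovers the expected value.
    """
    m = beta * np.asarray(returns, dtype=float)
    # Shift by the max before exponentiating (logsumexp trick)
    # to avoid overflow for large |beta * X|.
    c = m.max()
    return (c + np.log(np.mean(np.exp(m - c)))) / beta

rewards = [1.0, 2.0, 10.0]
print(entropic_risk(rewards, beta=-1.0))  # risk-averse: below the mean
print(entropic_risk(rewards, beta=1.0))   # risk-seeking: above the mean
print(np.mean(rewards))                   # 4.333... for comparison
```

This illustrates why a naive regression on exponentiated values is biased: the nonlinearity of the log-expectation is exactly what the paper's debiasing transformations are designed to correct for.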

Cite

Text

Fei et al. "Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach." International Conference on Machine Learning, 2021.

Markdown

[Fei et al. "Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/fei2021icml-risksensitive/)

BibTeX

@inproceedings{fei2021icml-risksensitive,
  title     = {{Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach}},
  author    = {Fei, Yingjie and Yang, Zhuoran and Wang, Zhaoran},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {3198--3207},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/fei2021icml-risksensitive/}
}