Cascaded Gaps: Towards Logarithmic Regret for Risk-Sensitive Reinforcement Learning
Abstract
In this paper, we study gap-dependent regret guarantees for risk-sensitive reinforcement learning based on the entropic risk measure. We propose a novel definition of sub-optimality gaps, which we call cascaded gaps, and we discuss their key components that adapt to the underlying structures of the problem. Based on the cascaded gaps, we derive non-asymptotic and logarithmic regret bounds for two model-free algorithms under episodic Markov decision processes. We show that, in appropriate settings, these bounds feature exponential improvement over existing ones that are independent of gaps. We also prove gap-dependent lower bounds, which certify the near optimality of the upper bounds.
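For context, a brief sketch of the entropic risk measure referenced above; this is the standard definition, with the risk parameter $\beta$ and the random cumulative return $X$ used as generic symbols (the paper's exact episodic-MDP formulation may differ):

\[
  \mathrm{ERM}_{\beta}(X) = \frac{1}{\beta} \log \mathbb{E}\left[ e^{\beta X} \right], \qquad \beta \neq 0,
\]

where $\beta > 0$ corresponds to risk-seeking and $\beta < 0$ to risk-averse behavior, and the limit $\beta \to 0$ recovers the risk-neutral objective $\mathbb{E}[X]$.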
Cite
Text
Fei and Xu. "Cascaded Gaps: Towards Logarithmic Regret for Risk-Sensitive Reinforcement Learning." International Conference on Machine Learning, 2022.
Markdown
[Fei and Xu. "Cascaded Gaps: Towards Logarithmic Regret for Risk-Sensitive Reinforcement Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/fei2022icml-cascaded/)
BibTeX
@inproceedings{fei2022icml-cascaded,
  title = {{Cascaded Gaps: Towards Logarithmic Regret for Risk-Sensitive Reinforcement Learning}},
  author = {Fei, Yingjie and Xu, Ruitu},
  booktitle = {International Conference on Machine Learning},
  year = {2022},
  pages = {6392--6417},
  volume = {162},
  url = {https://mlanthology.org/icml/2022/fei2022icml-cascaded/}
}