Risk Perspective Exploration in Distributional Reinforcement Learning
Abstract
Distributional reinforcement learning achieves state-of-the-art performance in both continuous and discrete control settings by modeling the full return distribution, whose variance and risk properties can be exploited for exploration. However, while numerous exploration methods in distributional RL use the variance of the return distribution per action, exploration methods that exploit the risk property are hard to find. In this paper, we present risk scheduling approaches that explore risk levels and optimistic behaviors from a risk perspective. Through comprehensive experiments, we demonstrate that risk scheduling improves the performance of the DMIX algorithm in a multi-agent setting.
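As an illustrative sketch only (not the paper's actual DMIX implementation), one way to realize risk-perspective exploration is to anneal a risk level over training: act optimistically early by averaging the upper tail of the estimated return quantiles, then relax toward the risk-neutral mean. The schedule endpoints and the upper-tail CVaR-style value below are assumptions for illustration.

```python
import numpy as np

def risk_schedule(step, total_steps, start=0.25, end=1.0):
    """Linearly anneal a tail fraction tau over training.

    Hypothetical schedule: a small tau early keeps only the upper tail of
    the return distribution (optimistic, risk-seeking exploration);
    tau = 1.0 recovers the risk-neutral mean.
    """
    frac = min(step / max(total_steps, 1), 1.0)
    return start + frac * (end - start)

def optimistic_value(quantiles, tau):
    """Average the top tau-fraction of quantile estimates.

    This upper-tail average is an optimistic action value that can replace
    the plain mean during action selection for exploration.
    """
    q = np.sort(np.asarray(quantiles))
    k = max(1, int(np.ceil(tau * len(q))))
    return float(q[-k:].mean())
```

At each step, the agent would score actions with `optimistic_value(quantiles_a, risk_schedule(step, total_steps))`, so exploration bonuses from upper-tail optimism fade as training progresses.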
Cite
Text
Oh et al. "Risk Perspective Exploration in Distributional Reinforcement Learning." ICML 2022 Workshops: AI4ABM, 2022.
Markdown
[Oh et al. "Risk Perspective Exploration in Distributional Reinforcement Learning." ICML 2022 Workshops: AI4ABM, 2022.](https://mlanthology.org/icmlw/2022/oh2022icmlw-risk/)
BibTeX
@inproceedings{oh2022icmlw-risk,
  title     = {{Risk Perspective Exploration in Distributional Reinforcement Learning}},
  author    = {Oh, Jihwan and Kim, Joonkee and Yun, Se-Young},
  booktitle = {ICML 2022 Workshops: AI4ABM},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/oh2022icmlw-risk/}
}