Implicit Quantile Networks for Distributional Reinforcement Learning
Abstract
In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN. We achieve this by using quantile regression to approximate the full quantile function for the state-action return distribution. By reparameterizing a distribution over the sample space, this yields an implicitly defined return distribution and gives rise to a large class of risk-sensitive policies. We demonstrate improved performance on the 57 Atari 2600 games in the ALE, and use our algorithm’s implicitly defined distributions to study the effects of risk-sensitive policies in Atari games.
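The abstract compresses the mechanism considerably, so the following is a minimal sketch (not the authors' released implementation) of the core idea in PyTorch: sample quantile fractions tau uniformly from (0, 1), embed them with a cosine basis, use that embedding to modulate the state features, and train the resulting sampled quantiles with the quantile Huber loss. The class name, layer sizes, n_cos=64, and kappa=1.0 below are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitQuantileNet(nn.Module):
    """Sketch of an IQN head: Z_tau(x, a) = f(psi(x) * phi(tau))."""
    def __init__(self, state_dim, num_actions, hidden=128, n_cos=64):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())  # state embedding psi(x)
        self.phi = nn.Linear(n_cos, hidden)                                # quantile embedding phi(tau)
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, num_actions))
        self.register_buffer("i_range", torch.arange(1, n_cos + 1).float())

    def forward(self, x, tau):
        # x: [B, state_dim]; tau: [B, N] quantile fractions sampled from U(0, 1)
        psi = self.psi(x).unsqueeze(1)                                  # [B, 1, hidden]
        cos = torch.cos(math.pi * self.i_range * tau.unsqueeze(-1))     # cosine basis, [B, N, n_cos]
        phi = F.relu(self.phi(cos))                                     # [B, N, hidden]
        return self.f(psi * phi)                                        # sampled quantiles, [B, N, num_actions]

def quantile_huber_loss(pred, target, tau, kappa=1.0):
    # pred: [B, N] predicted quantiles for the taken actions
    # target: [B, N'] Bellman target samples (treated as fixed, no gradient)
    # tau: [B, N] fractions used to produce `pred`
    td = target.unsqueeze(1) - pred.unsqueeze(2)                        # pairwise TD errors, [B, N, N']
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    weight = (tau.unsqueeze(2) - (td.detach() < 0).float()).abs()       # |tau - 1{delta < 0}|
    return (weight * huber / kappa).sum(dim=1).mean()                   # sum over quantiles, mean over targets and batch
```

Acting greedily with respect to the mean of the sampled quantiles recovers a risk-neutral policy; risk-sensitive behaviour comes from distorting the sampled tau before the forward pass (for instance, scaling tau by a factor in (0, 1) gives a CVaR-style criterion).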
Cite
Text
Dabney et al. "Implicit Quantile Networks for Distributional Reinforcement Learning." International Conference on Machine Learning, 2018.
Markdown
[Dabney et al. "Implicit Quantile Networks for Distributional Reinforcement Learning." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/dabney2018icml-implicit/)
BibTeX
@inproceedings{dabney2018icml-implicit,
title = {{Implicit Quantile Networks for Distributional Reinforcement Learning}},
author = {Dabney, Will and Ostrovski, Georg and Silver, David and Munos, Remi},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {1096--1105},
volume = {80},
url = {https://mlanthology.org/icml/2018/dabney2018icml-implicit/}
}