Distributional Reinforcement Learning for Efficient Exploration

Abstract

In distributional reinforcement learning (RL), the estimated distribution of the value function models both the parametric and intrinsic uncertainties. We propose a novel and efficient exploration method for deep RL that has two components. The first is a decaying schedule to suppress the intrinsic uncertainty. The second is an exploration bonus calculated from the upper quantiles of the learned distribution. In Atari 2600 games, our method achieves a 483% average gain across 49 games in cumulative rewards over QR-DQN. We also compared our algorithm with QR-DQN in a challenging 3D driving simulator (CARLA). Results show that our algorithm achieves near-optimal safety rewards twice as fast as QR-DQN.
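To make the two components concrete, below is a minimal action-selection sketch on top of QR-DQN-style quantile estimates: the bonus is derived from the spread of the upper (above-median) quantiles, and a decaying coefficient suppresses it over time. The function name `select_action`, the coefficient `c`, and the exact `sqrt(log t / t)` schedule are illustrative assumptions, not details fixed by the abstract.

```python
import numpy as np

def select_action(quantiles, t, c=50.0):
    """Greedy action w.r.t. mean value plus an upper-quantile exploration bonus.

    quantiles: array of shape (num_actions, num_quantiles), the learned
        quantile estimates for each action (as produced by QR-DQN).
    t: current time step, used by the decaying schedule.
    c: exploration coefficient (hypothetical default).
    """
    num_quantiles = quantiles.shape[1]
    mid = num_quantiles // 2

    # Mean over quantiles approximates the expected return (Q-value).
    q_values = quantiles.mean(axis=1)

    # Bonus from the upper quantiles only: spread of quantiles above the
    # median, capturing optimistic uncertainty in the learned distribution.
    upper = quantiles[:, mid:]
    median = quantiles[:, mid:mid + 1]
    bonus = ((upper - median) ** 2).mean(axis=1)

    # Decaying schedule to suppress the intrinsic-uncertainty part of the
    # bonus as learning progresses (assumed sqrt(log t / t) form).
    c_t = c * np.sqrt(np.log(t + 1) / (t + 1))

    return int(np.argmax(q_values + c_t * np.sqrt(bonus)))
```

For example, with `quantiles` of shape `(num_actions, 200)` taken from the network's output for the current state, calling `select_action(quantiles, t)` at each step replaces epsilon-greedy action selection.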

Cite

Text

Mavrin et al. "Distributional Reinforcement Learning for Efficient Exploration." International Conference on Machine Learning, 2019.

Markdown

[Mavrin et al. "Distributional Reinforcement Learning for Efficient Exploration." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/mavrin2019icml-distributional/)

BibTeX

@inproceedings{mavrin2019icml-distributional,
  title     = {{Distributional Reinforcement Learning for Efficient Exploration}},
  author    = {Mavrin, Borislav and Yao, Hengshuai and Kong, Linglong and Wu, Kaiwen and Yu, Yaoliang},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {4424--4434},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/mavrin2019icml-distributional/}
}