Statistics and Samples in Distributional Reinforcement Learning

Abstract

We present a unifying framework for designing and analysing distributional reinforcement learning (DRL) algorithms in terms of recursively estimating statistics of the return distribution. Our key insight is that DRL algorithms can be decomposed into the combination of a statistical estimator and a method for imputing a return distribution consistent with that set of statistics. With this new understanding, we are able to provide improved analyses of existing DRL algorithms as well as construct a new algorithm (EDRL) based upon estimation of the expectiles of the return distribution. We compare EDRL with existing methods on a variety of MDPs to illustrate concrete aspects of our analysis, and develop a deep RL variant of the algorithm, ER-DQN, which we evaluate on the Atari-57 suite of games.
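As a rough illustration of the expectile-based view (this is not the paper's EDRL or ER-DQN implementation), the NumPy sketch below estimates a set of expectiles of an empirical return distribution by minimising the asymmetric squared (expectile) loss. Function names, step sizes, and the toy return distribution are illustrative assumptions.

import numpy as np

def expectile_loss(estimates, samples, taus):
    # Asymmetric squared loss: the tau-expectile is its minimiser.
    diff = samples[None, :] - estimates[:, None]               # shape (K, N)
    weight = np.where(diff > 0, taus[:, None], 1.0 - taus[:, None])
    return np.mean(weight * diff ** 2)

def estimate_expectiles(samples, taus, lr=0.05, steps=2000):
    # Gradient descent on the expectile loss, starting from the sample mean.
    est = np.full(len(taus), samples.mean())
    for _ in range(steps):
        diff = samples[None, :] - est[:, None]
        weight = np.where(diff > 0, taus[:, None], 1.0 - taus[:, None])
        est += lr * np.mean(2.0 * weight * diff, axis=1)       # descend the loss
    return est

returns = np.random.normal(loc=1.0, scale=2.0, size=10_000)    # toy return samples
taus = np.array([0.1, 0.5, 0.9])
est = estimate_expectiles(returns, taus)
print(est, expectile_loss(est, returns, taus))                 # tau = 0.5 recovers the mean

In the full algorithm these statistics would be updated through a distributional Bellman backup, with an imputation step producing a return distribution consistent with the current expectile estimates; the sketch above only shows the static estimation problem.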

Cite

Text

Rowland et al. "Statistics and Samples in Distributional Reinforcement Learning." International Conference on Machine Learning, 2019.

Markdown

[Rowland et al. "Statistics and Samples in Distributional Reinforcement Learning." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/rowland2019icml-statistics/)

BibTeX

@inproceedings{rowland2019icml-statistics,
  title     = {{Statistics and Samples in Distributional Reinforcement Learning}},
  author    = {Rowland, Mark and Dadashi, Robert and Kumar, Saurabh and Munos, R{\'e}mi and Bellemare, Marc G. and Dabney, Will},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {5528--5536},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/rowland2019icml-statistics/}
}