The Nature of Temporal Difference Errors in Multi-Step Distributional Reinforcement Learning
Abstract
We study the multi-step off-policy learning approach to distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two cases in the multi-step setting. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. This distinction from the value-based case has important implications for concepts such as backward-view algorithms. Our work provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms, including results that apply to the small number of existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which yields a deep RL agent, QR-DQN-Retrace, showing empirical improvements over QR-DQN on the Atari-57 benchmark. Collectively, we shed light on how unique challenges in multi-step distributional RL can be addressed both in theory and practice.
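For readers unfamiliar with the two building blocks named in the abstract, the sketch below shows the standard quantile Huber loss of QR-DQN (Dabney et al., 2018) and the truncated importance weights of Retrace (Munos et al., 2016). It is a minimal NumPy illustration under those assumptions, not the paper's algorithm: how these pieces are combined into a multi-step distributional target via path-dependent TD errors is the paper's contribution and is not reproduced here. Function names and array shapes are illustrative.

```python
import numpy as np

def retrace_coefficients(pi_probs, mu_probs, lam=1.0):
    """Per-step truncated importance weights c_t = lam * min(1, pi/mu),
    as in Retrace(lambda); pi_probs and mu_probs are the target- and
    behavior-policy probabilities of the actions actually taken."""
    return lam * np.minimum(1.0, pi_probs / mu_probs)

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile Huber loss from QR-DQN.

    pred_quantiles: shape (N,) -- predicted quantile locations theta_i
    target_samples: shape (M,) -- atoms of the (bootstrapped) target distribution
    """
    N = pred_quantiles.shape[0]
    taus = (np.arange(N) + 0.5) / N                        # quantile midpoints tau_i
    # Pairwise distributional TD errors u_{ij} = target_j - theta_i
    u = target_samples[None, :] - pred_quantiles[:, None]  # shape (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric weighting |tau_i - 1{u_{ij} < 0}| of the quantile regression loss
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return (weight * huber).mean()
```

Truncating the importance ratio at 1, as Retrace does, keeps each per-step correction bounded and so controls variance off-policy; the abstract's point is that carrying this multi-step correction over to the distributional setting is not a straightforward sum of one-step errors, which is what motivates the path-dependent distributional TD errors introduced in the paper.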
Cite
Text
Tang et al. "The Nature of Temporal Difference Errors in Multi-Step Distributional Reinforcement Learning." Neural Information Processing Systems, 2022.
Markdown
[Tang et al. "The Nature of Temporal Difference Errors in Multi-Step Distributional Reinforcement Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/tang2022neurips-nature/)
BibTeX
@inproceedings{tang2022neurips-nature,
title = {{The Nature of Temporal Difference Errors in Multi-Step Distributional Reinforcement Learning}},
author = {Tang, Yunhao and Munos, Remi and Rowland, Mark and Pires, Bernardo Avila and Dabney, Will and Bellemare, Marc},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/tang2022neurips-nature/}
}