Rainbow: Combining Improvements in Deep Reinforcement Learning
Abstract
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
Cite
Text
Hessel et al. "Rainbow: Combining Improvements in Deep Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11796
Markdown
[Hessel et al. "Rainbow: Combining Improvements in Deep Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/hessel2018aaai-rainbow/) doi:10.1609/AAAI.V32I1.11796
BibTeX
@inproceedings{hessel2018aaai-rainbow,
title = {{Rainbow: Combining Improvements in Deep Reinforcement Learning}},
author = {Hessel, Matteo and Modayil, Joseph and van Hasselt, Hado and Schaul, Tom and Ostrovski, Georg and Dabney, Will and Horgan, Dan and Piot, Bilal and Azar, Mohammad Gheshlaghi and Silver, David},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
  pages = {3215--3222},
doi = {10.1609/AAAI.V32I1.11796},
url = {https://mlanthology.org/aaai/2018/hessel2018aaai-rainbow/}
}