Lifting the Veil on Hyper-Parameters for Value-Based Deep Reinforcement Learning

Abstract

Successful applications of deep reinforcement learning (deep RL) combine algorithmic design and careful hyper-parameter selection. The former often comes from iterative improvements over existing algorithms, while the latter is either inherited from prior methods or tuned for the specific method being introduced. Although critical to a method's performance, the effect of the various hyper-parameter choices is often overlooked in favour of algorithmic advances. In this paper, we perform an initial empirical investigation into a number of often-overlooked hyper-parameters for value-based deep RL agents, demonstrating their varying levels of importance. We conduct this study on a varied set of classic control environments, which helps highlight the effect each environment has on an algorithm's hyper-parameter sensitivity.
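As a minimal illustrative sketch only (not the paper's experimental setup), the kind of study described above can be framed as a sweep over hyper-parameter settings, environments, and seeds for a value-based agent. The train_dqn helper, the grid values, and the environment names below are hypothetical placeholders, not values reported in the paper.

import itertools
import random

# Hypothetical stand-in for training a value-based agent (e.g., a DQN-style
# agent) on one classic control task and returning its average return.
# A real study would call an actual agent/training loop here.
def train_dqn(env_name, learning_rate, epsilon_decay, hidden_units, seed):
    random.seed(seed)
    return random.uniform(0.0, 200.0)  # placeholder return value

# Illustrative hyper-parameter grid; these specific values are assumptions.
grid = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "epsilon_decay": [0.99, 0.999],
    "hidden_units": [64, 256],
}
envs = ["CartPole-v1", "Acrobot-v1", "MountainCar-v0"]  # example classic control tasks
seeds = range(3)

results = {}
for env_name in envs:
    for values in itertools.product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        returns = [train_dqn(env_name, seed=s, **config) for s in seeds]
        results[(env_name, values)] = sum(returns) / len(returns)

# Comparing how average return varies across settings within each environment
# gives a rough picture of per-environment hyper-parameter sensitivity.
for key, avg_return in sorted(results.items()):
    print(key, round(avg_return, 1))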

Cite

Text

Araújo et al. "Lifting the Veil on Hyper-Parameters for Value-Based Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.

Markdown

[Araújo et al. "Lifting the Veil on Hyper-Parameters for Value-Based Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/araujo2021neuripsw-lifting/)

BibTeX

@inproceedings{araujo2021neuripsw-lifting,
  title     = {{Lifting the Veil on Hyper-Parameters for Value-Based Deep Reinforcement Learning}},
  author    = {Araújo, João Guilherme Madeira and Ceron, Johan Samir Obando and Castro, Pablo Samuel},
  booktitle = {NeurIPS 2021 Workshops: DeepRL},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/araujo2021neuripsw-lifting/}
}