Dropout Q-Functions for Doubly Efficient Reinforcement Learning

Abstract

Randomized ensembled double Q-learning (REDQ) (Chen et al., 2021b) has recently achieved state-of-the-art sample efficiency on continuous-action reinforcement learning benchmarks. This superior sample efficiency is made possible by using a large Q-function ensemble. However, REDQ is much less computationally efficient than non-ensemble counterparts such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018a). To make REDQ more computationally efficient, we propose DroQ, a variant of REDQ that uses a small ensemble of dropout Q-functions. Our dropout Q-functions are simple Q-functions equipped with dropout connections and layer normalization. Despite its simplicity of implementation, our experimental results indicate that DroQ is doubly (sample- and computationally) efficient: it achieves sample efficiency comparable to REDQ, and computational efficiency far better than REDQ and comparable to that of SAC.
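
The abstract describes the dropout Q-function only at a high level (a Q-network with dropout connections and layer normalization, used in a small ensemble). Below is a minimal, illustrative PyTorch-style sketch of that idea; the hidden width, dropout rate, and ensemble size of two are assumptions for illustration, not necessarily the paper's exact hyperparameters.

import torch
import torch.nn as nn

class DroQFunction(nn.Module):
    # Illustrative dropout Q-function: an MLP over (state, action) with
    # dropout and layer normalization after each hidden linear layer.
    def __init__(self, obs_dim, act_dim, hidden_dim=256, dropout_rate=0.01):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden_dim),
            nn.Dropout(dropout_rate),   # dropout connection
            nn.LayerNorm(hidden_dim),   # layer normalization
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.Dropout(dropout_rate),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),   # scalar Q-value
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# A small ensemble of dropout Q-functions (two here, as an assumption)
# stands in for REDQ's large Q-function ensemble.
ensemble = nn.ModuleList([DroQFunction(obs_dim=17, act_dim=6) for _ in range(2)])

In this sketch, dropout injects the stochasticity that a large explicit ensemble would otherwise provide, while layer normalization keeps training stable; the small ensemble size is what yields the computational savings over REDQ.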

Cite

Text

Hiraoka et al. "Dropout Q-Functions for Doubly Efficient Reinforcement Learning." International Conference on Learning Representations, 2022.

Markdown

[Hiraoka et al. "Dropout Q-Functions for Doubly Efficient Reinforcement Learning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/hiraoka2022iclr-dropout/)

BibTeX

@inproceedings{hiraoka2022iclr-dropout,
  title     = {{Dropout Q-Functions for Doubly Efficient Reinforcement Learning}},
  author    = {Hiraoka, Takuya and Imagawa, Takahisa and Hashimoto, Taisei and Onishi, Takashi and Tsuruoka, Yoshimasa},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/hiraoka2022iclr-dropout/}
}