Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
Abstract
Using a high Update-To-Data (UTD) ratio, model-based methods have recently achieved much higher sample efficiency than previous model-free methods for continuous-action DRL benchmarks. In this paper, we introduce a simple model-free algorithm, Randomized Ensembled Double Q-Learning (REDQ), and show that its performance is just as good as, if not better than, that of a state-of-the-art model-based algorithm for the MuJoCo benchmark. Moreover, REDQ can achieve this performance using fewer parameters than the model-based method, and with less wall-clock run time. REDQ has three carefully integrated ingredients which allow it to achieve its high performance: (i) a UTD ratio $\gg 1$; (ii) an ensemble of Q functions; (iii) in-target minimization across a random subset of Q functions from the ensemble. Through carefully designed experiments, we provide a detailed analysis of REDQ and related model-free algorithms. To our knowledge, REDQ is the first successful model-free DRL algorithm for continuous-action spaces using a UTD ratio $\gg 1$.
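The three ingredients listed in the abstract can be illustrated with a short sketch of the Bellman target REDQ computes: an ensemble of Q-functions whose target takes a minimum over a randomly sampled subset of the target ensemble. The sketch below is based only on the description above; the ensemble size, subset size, network architecture, and SAC-style entropy term are illustrative assumptions, not values stated on this page.

# Minimal sketch of the REDQ-style target: ensemble of Q-functions with
# in-target minimization over a random subset. All hyperparameters below
# (N_ENSEMBLE, M_SUBSET, GAMMA, ALPHA) are assumed for illustration.
import random
import torch
import torch.nn as nn

N_ENSEMBLE = 10      # assumed ensemble size
M_SUBSET = 2         # assumed size of the random subset used in the target
GAMMA = 0.99         # discount factor
ALPHA = 0.2          # assumed SAC-style entropy coefficient

def make_q(obs_dim: int, act_dim: int) -> nn.Module:
    # Small MLP critic Q(s, a); the architecture is an illustrative choice.
    return nn.Sequential(
        nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

def redq_target(q_targets, rewards, next_obs, next_acts, next_logp, dones):
    # In-target minimization across a random subset of the target ensemble.
    idx = random.sample(range(len(q_targets)), M_SUBSET)
    sa = torch.cat([next_obs, next_acts], dim=-1)
    q_subset = torch.stack([q_targets[i](sa) for i in idx], dim=0)  # (M, B, 1)
    q_min = q_subset.min(dim=0).values                              # (B, 1)
    return rewards + GAMMA * (1.0 - dones) * (q_min - ALPHA * next_logp)

if __name__ == "__main__":
    obs_dim, act_dim, batch = 4, 2, 8
    q_target_ensemble = [make_q(obs_dim, act_dim) for _ in range(N_ENSEMBLE)]
    rewards = torch.zeros(batch, 1)
    next_obs = torch.randn(batch, obs_dim)
    next_acts = torch.randn(batch, act_dim)
    next_logp = torch.zeros(batch, 1)
    dones = torch.zeros(batch, 1)
    # With a UTD ratio >> 1, this target (and a gradient step for every
    # ensemble member against it) would be recomputed many times per
    # environment step; here it is computed once for demonstration.
    y = redq_target(q_target_ensemble, rewards, next_obs, next_acts, next_logp, dones)
    print(y.shape)  # torch.Size([8, 1])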
Cite

Text
Chen et al. "Randomized Ensembled Double Q-Learning: Learning Fast Without a Model." International Conference on Learning Representations, 2021.

Markdown
[Chen et al. "Randomized Ensembled Double Q-Learning: Learning Fast Without a Model." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/chen2021iclr-randomized/)

BibTeX
@inproceedings{chen2021iclr-randomized,
  title     = {{Randomized Ensembled Double Q-Learning: Learning Fast Without a Model}},
  author    = {Chen, Xinyue and Wang, Che and Zhou, Zijian and Ross, Keith W.},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/chen2021iclr-randomized/}
}