Ensemble Bootstrapping for Q-Learning
Abstract
Q-learning (QL), a common reinforcement learning algorithm, suffers from over-estimation bias due to the maximization term in the optimal Bellman operator. This bias may lead to sub-optimal behavior. Double-Q-learning tackles this issue by utilizing two estimators, yet it results in an under-estimation bias. As with over-estimation in Q-learning, the under-estimation bias may degrade performance in certain scenarios. In this work, we introduce a new bias-reduced algorithm called Ensemble Bootstrapped Q-Learning (EBQL), a natural extension of Double-Q-learning to ensembles. We analyze our method both theoretically and empirically. Theoretically, we prove that EBQL-like updates yield lower MSE when estimating the maximal mean of a set of independent random variables. Empirically, we show that there exist domains where both over- and under-estimation result in sub-optimal performance. Finally, we demonstrate the superior performance of a deep RL variant of EBQL over other deep QL algorithms on a suite of ATARI games.
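The abstract describes EBQL only as a natural extension of Double-Q-learning to ensembles. Below is a minimal tabular sketch of what such an ensemble-bootstrapped update could look like, assuming the member being updated selects the greedy next action while the average of the remaining members evaluates it; the ensemble size, array layout, and exact update rule here are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def ebql_update(Q, k, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Sketch of an EBQL-style update for ensemble member k.

    Q is assumed to have shape [K, n_states, n_actions] (K ensemble members).
    """
    K = Q.shape[0]
    # Greedy next action chosen by the member being updated.
    a_star = np.argmax(Q[k, s_next])
    # That action is evaluated by the average of the *other* K-1 members,
    # decoupling action selection from value estimation (as in Double-Q-learning,
    # but bootstrapped across an ensemble).
    others = [j for j in range(K) if j != k]
    target = r + gamma * np.mean(Q[others, s_next, a_star])
    # Standard TD step on member k only.
    Q[k, s, a] += alpha * (target - Q[k, s, a])
    return Q
```

In this sketch, each environment transition would update a single (e.g., randomly chosen) ensemble member, so that selection and evaluation remain based on differently trained estimators.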
Cite
Text
Peer et al. "Ensemble Bootstrapping for Q-Learning." International Conference on Machine Learning, 2021.
Markdown
[Peer et al. "Ensemble Bootstrapping for Q-Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/peer2021icml-ensemble/)
BibTeX
@inproceedings{peer2021icml-ensemble,
title = {{Ensemble Bootstrapping for Q-Learning}},
author = {Peer, Oren and Tessler, Chen and Merlis, Nadav and Meir, Ron},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {8454--8463},
volume = {139},
url = {https://mlanthology.org/icml/2021/peer2021icml-ensemble/}
}