Analysis of Q-Learning with Adaptation and Momentum Restart for Gradient Descent
Abstract
Existing convergence analyses of Q-learning mostly focus on the vanilla stochastic gradient descent (SGD) type of updates. Although Adaptive Moment Estimation (Adam) has been commonly used in practical Q-learning algorithms, no convergence guarantee has been provided for Q-learning with such updates. In this paper, we first characterize the convergence rate of Q-AMSGrad, the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative to Adam for theoretical analysis). To further improve the performance, we propose to incorporate a momentum restart scheme into Q-AMSGrad, resulting in the so-called Q-AMSGradR algorithm. The convergence rate of Q-AMSGradR is also established. Our experiments on a linear quadratic regulator problem demonstrate that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates. The two algorithms also exhibit significantly better performance than the DQN learning method over a batch of Atari 2600 games.
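For readers unfamiliar with the update rules the abstract refers to, below is a minimal sketch of an AMSGrad step combined with a periodic momentum restart. The function names (`q_amsgrad_step`, `q_amsgradr_run`, `grad_fn`) and the periodic restart criterion are illustrative assumptions, not taken from the paper; in the Q-learning setting, `g` would be a stochastic semi-gradient of the Bellman error.

```python
import numpy as np

def q_amsgrad_step(theta, g, opt_state, alpha=1e-3, beta1=0.9,
                   beta2=0.999, eps=1e-8):
    """One AMSGrad update on parameters theta given a gradient g.

    opt_state carries the running moment estimates (m, v, v_hat).
    """
    m, v, v_hat = opt_state
    m = beta1 * m + (1.0 - beta1) * g        # first moment (momentum)
    v = beta2 * v + (1.0 - beta2) * g * g    # second moment
    v_hat = np.maximum(v_hat, v)             # AMSGrad: non-decreasing v_hat
    theta = theta - alpha * m / (np.sqrt(v_hat) + eps)
    return theta, (m, v, v_hat)

def q_amsgradr_run(theta, grad_fn, num_steps, restart_period=100, **kw):
    """AMSGrad loop with momentum restart (a Q-AMSGradR-style sketch).

    Zeroing the first moment every restart_period steps is one simple
    restart heuristic; the paper's exact restart criterion may differ.
    """
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    v_hat = np.zeros_like(theta)
    for t in range(1, num_steps + 1):
        if t % restart_period == 0:
            m = np.zeros_like(theta)         # momentum restart
        g = grad_fn(theta, t)
        theta, (m, v, v_hat) = q_amsgrad_step(theta, g, (m, v, v_hat), **kw)
    return theta

# Toy usage: minimize ||theta||^2 with noisy gradients (a stand-in for a
# Bellman semi-gradient), just to exercise the update rule.
rng = np.random.default_rng(0)
grad_fn = lambda theta, t: 2.0 * theta + 0.01 * rng.standard_normal(theta.shape)
theta = q_amsgradr_run(np.ones(4), grad_fn, num_steps=2000, alpha=1e-2)
```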
Cite
Text
Weng et al. "Analysis of Q-Learning with Adaptation and Momentum Restart for Gradient Descent." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/422
Markdown
[Weng et al. "Analysis of Q-Learning with Adaptation and Momentum Restart for Gradient Descent." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/weng2020ijcai-analysis/) doi:10.24963/IJCAI.2020/422
BibTeX
@inproceedings{weng2020ijcai-analysis,
title = {{Analysis of Q-Learning with Adaptation and Momentum Restart for Gradient Descent}},
author = {Weng, Bowen and Xiong, Huaqing and Liang, Yingbin and Zhang, Wei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {3051--3057},
doi = {10.24963/IJCAI.2020/422},
url = {https://mlanthology.org/ijcai/2020/weng2020ijcai-analysis/}
}