Incremental Multi-Step Q-Learning
Abstract
This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming based reinforcement learning method, with the TD(λ) return estimation process typically used in actor-critic learning, another well-known reinforcement learning paradigm. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations.
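To make the abstract's description concrete, here is a minimal tabular sketch of a Q(λ)-style update in the spirit of Peng and Williams: eligibility traces decayed by γλ spread each step's temporal-difference error back over recently visited state-action pairs, while the current pair also receives the standard one-step Q-learning correction. The corridor environment, all hyperparameters, and the tie-breaking rule are illustrative assumptions, not taken from the paper; consult the paper itself for the exact algorithm and its analysis.

```python
import random

# Hypothetical toy problem (not from the paper): a 5-state corridor.
# Action 1 moves right, action 0 moves left; reaching the rightmost
# state ends the episode with reward 1.
N_STATES = 5
ACTIONS = (0, 1)  # 0 = left, 1 = right
GOAL = N_STATES - 1

def env_step(s, a):
    """Deterministic corridor dynamics (illustrative only)."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

def q_lambda(episodes=300, alpha=0.1, gamma=0.9, lam=0.7, eps=0.2, seed=0):
    """Tabular Q(lambda)-style learning with accumulating traces (a sketch)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(N_STATES)]  # eligibility traces
        s, done = 0, False
        while not done:
            # epsilon-greedy action; ties broken toward action 1 (an assumption)
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: (Q[s][x], x))
            s2, r, done = env_step(s, a)
            v_next = 0.0 if done else max(Q[s2])
            # Two TD error terms, echoing the paper's use of both the
            # greedy state value and the current Q estimate:
            d_greedy = r + gamma * v_next - max(Q[s])  # error w.r.t. greedy value of s
            d_pair = r + gamma * v_next - Q[s][a]      # one-step Q-learning error
            for si in range(N_STATES):
                for ai in ACTIONS:
                    e[si][ai] *= gamma * lam                   # decay all traces
                    Q[si][ai] += alpha * d_greedy * e[si][ai]  # credit traced pairs
            Q[s][a] += alpha * d_pair  # one-step update of the current pair
            e[s][a] += 1.0             # then accumulate its trace
            s = s2
    return Q
```

After training on the corridor, the learned greedy policy moves right from every non-goal state, illustrating how the λ-decayed traces propagate the terminal reward back through the action sequence faster than one-step Q-learning alone would.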
Cite
Text
Peng and Williams. "Incremental Multi-Step Q-Learning." Machine Learning, 1996. doi:10.1023/A:1018076709321
Markdown
[Peng and Williams. "Incremental Multi-Step Q-Learning." Machine Learning, 1996.](https://mlanthology.org/mlj/1996/peng1996mlj-incremental/) doi:10.1023/A:1018076709321
BibTeX
@article{peng1996mlj-incremental,
title = {{Incremental Multi-Step Q-Learning}},
author = {Peng, Jing and Williams, Ronald J.},
journal = {Machine Learning},
year = {1996},
pages = {283--290},
doi = {10.1023/A:1018076709321},
volume = {22},
url = {https://mlanthology.org/mlj/1996/peng1996mlj-incremental/}
}