Incremental Multi-Step Q-Learning

Abstract

This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming-based reinforcement learning method, with the TD(λ) return estimation process, which is typically used in actor-critic learning, another well-known dynamic-programming-based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm is demonstrated through computer simulations of the standard benchmark control problem of learning to balance a pole on a cart.
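To make the idea concrete, below is a minimal tabular sketch of a Q(λ)-style update with eligibility traces. It follows the Watkins-style rule that cuts traces after exploratory actions (Peng and Williams' variant handles exploratory actions differently), and it runs on a hypothetical toy chain MDP invented here purely for illustration; all names and parameter values are assumptions, not the paper's experimental setup.

```python
import random

def greedy(Q, s):
    # Index of the highest-valued action in state s (ties go to the lower index).
    return max(range(len(Q[s])), key=lambda a: Q[s][a])

def q_lambda_chain(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                   lam=0.8, eps=0.1, seed=0):
    """Tabular Q(lambda) sketch on a toy 1-D chain (hypothetical problem):
    states 0..n_states-1, action 0 = left, action 1 = right, reward 1 on
    entering the rightmost state, which is terminal."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]

    def choose(s):
        # Epsilon-greedy exploration.
        return rng.randrange(2) if rng.random() < eps else greedy(Q, s)

    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n_states)]     # eligibility traces
        s, a = 0, choose(0)
        while s != n_states - 1:
            s2 = max(0, s - 1) if a == 0 else s + 1   # deterministic move
            r = 1.0 if s2 == n_states - 1 else 0.0
            a2 = choose(s2)
            a_star = greedy(Q, s2)
            if Q[s2][a2] == Q[s2][a_star]:
                a_star = a2                           # break ties toward a2
            # One-step Q-learning error, bootstrapped from the greedy action.
            delta = r + gamma * Q[s2][a_star] - Q[s][a]
            e[s][a] += 1.0                            # accumulating trace
            greedy_step = (a2 == a_star)
            for si in range(n_states):
                for ai in range(2):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    # Decay traces by gamma*lambda; the Watkins-style rule
                    # zeroes them after an exploratory action (Peng and
                    # Williams' variant does not cut traces this way).
                    e[si][ai] = e[si][ai] * gamma * lam if greedy_step else 0.0
            s, a = s2, a2
    return Q
```

With λ > 0, a single rewarded transition updates every recently visited state-action pair in proportion to its trace, which is how credit propagates through whole action sequences rather than one step at a time.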

Cite

Text

Peng and Williams. "Incremental Multi-Step Q-Learning." International Conference on Machine Learning, 1994. doi:10.1016/B978-1-55860-335-6.50035-0

Markdown

[Peng and Williams. "Incremental Multi-Step Q-Learning." International Conference on Machine Learning, 1994.](https://mlanthology.org/icml/1994/peng1994icml-incremental/) doi:10.1016/B978-1-55860-335-6.50035-0

BibTeX

@inproceedings{peng1994icml-incremental,
  title     = {{Incremental Multi-Step Q-Learning}},
  author    = {Peng, Jing and Williams, Ronald J.},
  booktitle = {International Conference on Machine Learning},
  year      = {1994},
  pages     = {226--232},
  doi       = {10.1016/B978-1-55860-335-6.50035-0},
  url       = {https://mlanthology.org/icml/1994/peng1994icml-incremental/}
}