The Convergence of TD(lambda) for General Lambda

Abstract

The method of temporal differences (TD) is one way of making consistent predictions about the future. This paper uses some analysis of Watkins (1989) to extend a convergence theorem due to Sutton (1988) from the case which only uses information from adjacent time steps to that involving information from arbitrary ones. It also considers how this version of TD behaves in the face of linearly dependent representations for states—demonstrating that it still converges, but to a different answer from the least mean squares algorithm. Finally it adapts Watkins' theorem that Q-learning, his closely related prediction and action learning method, converges with probability one, to demonstrate this strong form of convergence for a slightly modified version of TD.
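For readers unfamiliar with the algorithm the abstract discusses, here is a minimal sketch of tabular TD(lambda) with accumulating eligibility traces, run on a small symmetric random walk (the classic test problem from Sutton 1988). The function name, state layout, and parameter values are illustrative choices, not from the paper; the paper's convergence results concern linear function approximation, of which this tabular case is the simplest instance.

```python
import random

def td_lambda(episodes, n_states=5, alpha=0.05, gamma=1.0, lam=0.8, seed=0):
    """Tabular TD(lambda) with accumulating eligibility traces on a
    symmetric random walk: nonterminal states 0..n_states-1, terminating
    off the left end (reward 0) or the right end (reward 1).
    True values are (i + 1) / (n_states + 1)."""
    rng = random.Random(seed)
    v = [0.5] * n_states                # value estimates
    for _ in range(episodes):
        e = [0.0] * n_states            # eligibility traces, reset each episode
        s = n_states // 2               # start in the middle state
        while True:
            s2 = s + (1 if rng.random() < 0.5 else -1)
            if s2 < 0:                  # fell off the left end
                r, v2, done = 0.0, 0.0, True
            elif s2 >= n_states:        # fell off the right end
                r, v2, done = 1.0, 0.0, True
            else:
                r, v2, done = 0.0, v[s2], False
            delta = r + gamma * v2 - v[s]   # one-step TD error
            e[s] += 1.0                     # accumulate trace for current state
            for i in range(n_states):
                v[i] += alpha * delta * e[i]
                e[i] *= gamma * lam         # decay all traces
            if done:
                break
            s = s2
    return v

values = td_lambda(5000)
```

With lam=0 this reduces to the adjacent-time-step case of Sutton's original theorem; larger lam propagates each TD error to earlier states in the episode, which is the "information from arbitrary time steps" the abstract refers to.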

Cite

Text

Dayan. "The Convergence of TD(lambda) for General Lambda." Machine Learning, 1992. doi:10.1007/BF00992701

Markdown

[Dayan. "The Convergence of TD(lambda) for General Lambda." Machine Learning, 1992.](https://mlanthology.org/mlj/1992/dayan1992mlj-convergence/) doi:10.1007/BF00992701

BibTeX

@article{dayan1992mlj-convergence,
  title     = {{The Convergence of TD(lambda) for General Lambda}},
  author    = {Dayan, Peter},
  journal   = {Machine Learning},
  year      = {1992},
  pages     = {341--362},
  doi       = {10.1007/BF00992701},
  volume    = {8},
  url       = {https://mlanthology.org/mlj/1992/dayan1992mlj-convergence/}
}