Quasi Newton Temporal Difference Learning
Abstract
Fast-converging and computationally inexpensive policy evaluation is an essential part of reinforcement learning algorithms based on policy iteration. Algorithms such as LSTD, LSPE, FPKF and NTD have faster convergence rates but are computationally expensive. On the other hand, there are algorithms that are computationally cheap but converge more slowly, among them TD, RG, GTD2 and TDC. This paper presents a regularized Quasi Newton Temporal Difference learning algorithm that uses second-order information while maintaining a fast convergence rate. In simple terms, we combine the idea of TD learning with the Quasi Newton algorithm SGD-QN. We explore the development of the QNTD algorithm and discuss its convergence properties. We support our ideas with empirical results on four standard benchmarks from the reinforcement learning literature: two small problems, Random Walk and Boyan chain, and two larger problems, cart-pole and linked-pole balancing. Empirical studies show that QNTD speeds up convergence and provides better accuracy than conventional TD.
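The abstract describes scaling TD updates with second-order (curvature) information. The sketch below is only an assumption-based illustration of that general flavor for linear value approximation, not the QNTD update derived in the paper; the function name, the diagonal curvature estimate, and the regularization constant `reg` are all hypothetical.

```python
import numpy as np

def td0_diagonal_precondition_sketch(transitions, n_features,
                                     gamma=0.99, alpha=0.1, reg=1e-2):
    """Illustrative linear TD(0) with a diagonal, regularized
    curvature-style preconditioner (a sketch, not the paper's QNTD).

    `transitions` is an iterable of (phi_s, reward, phi_s_next) tuples,
    assuming a linear value function V(s) = theta . phi(s).
    """
    theta = np.zeros(n_features)
    # Running diagonal curvature estimate, regularized away from zero.
    h_diag = np.full(n_features, reg)
    for phi_s, r, phi_next in transitions:
        # Standard TD error for the current transition.
        td_error = r + gamma * theta @ phi_next - theta @ phi_s
        # Crude curvature accumulation from feature activity.
        h_diag += phi_s * phi_s
        # Precondition the usual TD step by the inverse diagonal curvature.
        theta += alpha * td_error * (phi_s / h_diag)
    return theta
```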
Cite
Text
Givchi and Palhang. "Quasi Newton Temporal Difference Learning." Proceedings of the Sixth Asian Conference on Machine Learning, 2014.
Markdown
[Givchi and Palhang. "Quasi Newton Temporal Difference Learning." Proceedings of the Sixth Asian Conference on Machine Learning, 2014.](https://mlanthology.org/acml/2014/givchi2014acml-quasi/)
BibTeX
@inproceedings{givchi2014acml-quasi,
title = {{Quasi Newton Temporal Difference Learning}},
author = {Givchi, Arash and Palhang, Maziar},
booktitle = {Proceedings of the Sixth Asian Conference on Machine Learning},
year = {2014},
pages = {159-172},
volume = {39},
url = {https://mlanthology.org/acml/2014/givchi2014acml-quasi/}
}