LSTD with Random Projections

Abstract

We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is larger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a space of low dimension is generated with a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections propagates through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
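
For readers who want a concrete picture of the method, below is a minimal sketch of LSTD applied to randomly projected features, assuming NumPy, a Gaussian projection matrix with variance-1/d entries, and a small ridge term for numerical stability. The function and variable names (lstd_random_projections, Phi, Phi_next, reg) are illustrative and not taken from the paper, which analyzes a specific regularized variant with precise constants.

import numpy as np

def lstd_random_projections(Phi, Phi_next, rewards, d, gamma=0.95, reg=1e-6, seed=0):
    """Sketch of LSTD after a Gaussian random projection.

    Phi      : (n, D) high-dimensional features of the visited states
    Phi_next : (n, D) features of the successor states
    rewards  : (n,)   observed rewards
    d        : target (low) dimension of the projection
    """
    n, D = Phi.shape
    rng = np.random.default_rng(seed)
    # Random projection with N(0, 1/d) entries, so inner products are preserved in expectation.
    A = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))

    Psi = Phi @ A            # (n, d) projected features
    Psi_next = Phi_next @ A

    # LSTD fixed point in the projected space:
    # (Psi^T (Psi - gamma * Psi_next) + reg * I) theta = Psi^T r
    M = Psi.T @ (Psi - gamma * Psi_next) + reg * np.eye(d)
    b = Psi.T @ rewards
    theta = np.linalg.solve(M, b)

    # The value of a state with high-dimensional features phi is estimated by (A^T phi) . theta.
    return theta, A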

Cite

Text

Ghavamzadeh et al. "LSTD with Random Projections." Neural Information Processing Systems, 2010.

Markdown

[Ghavamzadeh et al. "LSTD with Random Projections." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/ghavamzadeh2010neurips-lstd/)

BibTeX

@inproceedings{ghavamzadeh2010neurips-lstd,
  title     = {{LSTD with Random Projections}},
  author    = {Ghavamzadeh, Mohammad and Lazaric, Alessandro and Maillard, Odalric and Munos, Rémi},
  booktitle = {Neural Information Processing Systems},
  year      = {2010},
  pages     = {721--729},
  url       = {https://mlanthology.org/neurips/2010/ghavamzadeh2010neurips-lstd/}
}