Using Predictive Representations to Improve Generalization in Reinforcement Learning
Abstract
The predictive representations hypothesis holds that particularly good generalization will result from representing the state of the world in terms of predictions about possible future experience. This hypothesis has been a central motivation behind recent research in, for example, predictive state representations (PSRs) and TD networks. In this paper we present the first explicit investigation of this hypothesis. We show in a reinforcement-learning example (a grid-world navigation task) that a predictive representation in tabular form can learn much faster than both the tabular explicit-state representation and a tabular history-based method.
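For orientation, the "tabular explicit-state representation" baseline mentioned in the abstract can be sketched as ordinary tabular Q-learning on a toy grid world. The grid size, reward scheme, and hyperparameters below are illustrative assumptions, not the paper's actual experimental setup, and this sketch does not implement the predictive representation itself.

```python
import random

# Minimal tabular Q-learning on a toy grid world: a sketch of the
# "tabular explicit-state" baseline, with one Q-value per (state, action).
# All constants here are illustrative assumptions.
SIZE = 5                      # 5x5 grid; goal in the bottom-right corner
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Explicit-state table: one entry per (grid cell, action) pair.
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, a):
    """Deterministic transition; walls clip movement. Reward 1 at the goal."""
    dr, dc = ACTIONS[a]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(state):
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

random.seed(0)
for episode in range(500):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(len(ACTIONS)) if random.random() < EPSILON else greedy(s)
        s2, reward, done = step(s, a)
        # One-step Q-learning backup toward the greedy successor value.
        target = reward if done else reward + GAMMA * max(
            Q[(s2, b)] for b in range(len(ACTIONS)))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
```

The point of contrast drawn in the paper is that this table indexes states by their identity, whereas a predictive representation would index them by predictions of future experience, which is what yields the faster generalization reported in the abstract.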
Cite
Text
Rafols et al. "Using Predictive Representations to Improve Generalization in Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2005.

Markdown

[Rafols et al. "Using Predictive Representations to Improve Generalization in Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2005.](https://mlanthology.org/ijcai/2005/rafols2005ijcai-using/)

BibTeX
@inproceedings{rafols2005ijcai-using,
title = {{Using Predictive Representations to Improve Generalization in Reinforcement Learning}},
author = {Rafols, Eddie J. and Ring, Mark B. and Sutton, Richard S. and Tanner, Brian},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2005},
pages = {835--840},
url = {https://mlanthology.org/ijcai/2005/rafols2005ijcai-using/}
}