Stable Function Approximation in Dynamic Programming

Abstract

The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of fitted value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.
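The contraction argument the abstract alludes to can be made concrete: a k-nearest-neighbor averager is a row-stochastic linear map, so it is a nonexpansion in the max norm, and composing it with the Bellman backup (a γ-contraction) leaves the fitted iteration a contraction. The sketch below illustrates this; it is not code from the paper, and the MDP arrays `P`, `R` and the helper names are hypothetical.

```python
import numpy as np

def knn_averager_matrix(states, k):
    """Row-stochastic matrix A whose rows average the k nearest sample states.

    Each row is nonnegative and sums to 1, so V -> A @ V is a max-norm
    nonexpansion -- the property the convergence proof relies on.
    """
    n = len(states)
    A = np.zeros((n, n))
    for i, s in enumerate(states):
        nearest = np.argsort(np.abs(states - s))[:k]
        A[i, nearest] = 1.0 / k
    return A

def fitted_value_iteration(P, R, gamma, A, iters=200):
    """Iterate V <- A @ T(V), where T is the greedy Bellman backup.

    T is a gamma-contraction in max norm and A is a nonexpansion, so the
    composition contracts and the iteration converges to a fixed point.
    P has shape (n_actions, n_states, n_states); R has shape
    (n_actions, n_states).
    """
    V = np.zeros(P.shape[-1])
    for _ in range(iters):
        Q = R + gamma * (P @ V)   # backed-up action values at sample states
        V = A @ Q.max(axis=0)     # greedy backup, then refit the averager
    return V

if __name__ == "__main__":
    # Toy MDP with random (row-stochastic) dynamics, purely for illustration.
    rng = np.random.default_rng(0)
    n_states, n_actions = 11, 2
    states = np.linspace(0.0, 1.0, n_states)
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=-1, keepdims=True)
    R = rng.random((n_actions, n_states))
    A = knn_averager_matrix(states, k=3)
    print(fitted_value_iteration(P, R, gamma=0.9, A=A))
```

The row-stochasticity of `A` is doing all the work here: an approximator whose fitted values can overshoot its targets may expand distances in max norm, which is exactly the failure mode the paper's expansion/contraction view isolates.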

Cite

Text

Gordon. "Stable Function Approximation in Dynamic Programming." International Conference on Machine Learning, 1995. doi:10.1016/B978-1-55860-377-6.50040-2

Markdown

[Gordon. "Stable Function Approximation in Dynamic Programming." International Conference on Machine Learning, 1995.](https://mlanthology.org/icml/1995/gordon1995icml-stable/) doi:10.1016/B978-1-55860-377-6.50040-2

BibTeX

@inproceedings{gordon1995icml-stable,
  title     = {{Stable Function Approximation in Dynamic Programming}},
  author    = {Gordon, Geoffrey J.},
  booktitle = {International Conference on Machine Learning},
  year      = {1995},
  pages     = {261--268},
  doi       = {10.1016/B978-1-55860-377-6.50040-2},
  url       = {https://mlanthology.org/icml/1995/gordon1995icml-stable/}
}