Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method

Abstract

This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural-network-based reinforcement learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically that reasonably few interactions with the plant are needed to generate control policies of high quality.
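The abstract only names the core principle: collect a batch of transition experiences once, then repeatedly refit a neural Q-function on targets computed from the current estimate. As an illustration only, here is a minimal sketch of that fitted-Q loop on a hypothetical toy chain MDP. The environment, all names, and the training details are my own assumptions, not the paper's: the paper trains the MLP with Rprop and phrases the problem as cost minimization, while this sketch uses plain batch gradient descent and reward maximization to stay dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (not from the paper): a 5-state chain, actions
# -1/+1, reward 1 for reaching the rightmost state (treated as terminal).
N_STATES, GAMMA = 5, 0.9

def step(s, a):
    s2 = int(min(max(s + a, 0), N_STATES - 1))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

# 1) Collect a fixed batch of transitions once; the NFQ principle is to
#    reuse this stored batch for all fitting, instead of learning online.
batch = []
for _ in range(200):
    s, a = int(rng.integers(N_STATES)), int(rng.choice([-1, 1]))
    batch.append((s, a, *step(s, a)))
S = np.array([t[0] for t in batch]); A = np.array([t[1] for t in batch])
S2 = np.array([t[2] for t in batch]); R = np.array([t[3] for t in batch])

# Tiny one-hidden-layer perceptron Q(s, a); assumed architecture and sizes.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def feats(s, a):
    # Normalize state to [0, 1]; action enters as +/-1.
    return np.column_stack([np.asarray(s, float) / (N_STATES - 1),
                            np.asarray(a, float)])

def q(X):
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel(), h

X = feats(S, A)
nonterm = (R < 1.0).astype(float)  # goal transitions bootstrap nothing

# 2) Fitted-Q outer loop: freeze targets from the current net, then refit
#    the whole batch as an ordinary supervised regression problem.
for _ in range(30):
    q_next = np.maximum(q(feats(S2, np.full(len(S2), -1)))[0],
                        q(feats(S2, np.full(len(S2), 1)))[0])
    target = R + GAMMA * nonterm * q_next
    for _ in range(300):  # inner supervised fit: gradient descent on MSE
        pred, h = q(X)
        err = (pred - target) / len(X)
        dh = (err[:, None] * W2.T) * (1 - h * h)
        W2 -= 0.1 * (h.T @ err[:, None]); b2 -= 0.1 * err.sum(keepdims=True)
        W1 -= 0.1 * (X.T @ dh);           b1 -= 0.1 * dh.sum(0)

# Greedy policy read off the learned Q-function.
greedy = [(-1, 1)[int(np.argmax([q(feats(s, a))[0][0] for a in (-1, 1)]))]
          for s in range(N_STATES)]
print("greedy actions per state:", greedy)
```

On this toy chain the greedy policy should move right toward the rewarded state; the point of the sketch is the two-level structure (fixed experience batch, repeated supervised refits), which is what the abstract credits for the method's data efficiency.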

Cite

Text

Riedmiller. "Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method." European Conference on Machine Learning, 2005. doi:10.1007/11564096_32

Markdown

[Riedmiller. "Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method." European Conference on Machine Learning, 2005.](https://mlanthology.org/ecmlpkdd/2005/riedmiller2005ecml-neural/) doi:10.1007/11564096_32

BibTeX

@inproceedings{riedmiller2005ecml-neural,
  title     = {{Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method}},
  author    = {Riedmiller, Martin A.},
  booktitle = {European Conference on Machine Learning},
  year      = {2005},
  pages     = {317--328},
  doi       = {10.1007/11564096_32},
  url       = {https://mlanthology.org/ecmlpkdd/2005/riedmiller2005ecml-neural/}
}