Fitted Q-Iteration in Continuous Action-Space MDPs

Abstract

We consider continuous-state, continuous-action batch reinforcement learning, where the goal is to learn a good policy from a sufficiently rich trajectory generated by another policy. We study a variant of fitted Q-iteration in which the greedy action selection is replaced by a search over a restricted set of candidate policies, choosing the policy that maximizes the average of the action values. We provide a rigorous theoretical analysis of this algorithm, proving what we believe are the first finite-time bounds for value-function-based algorithms in continuous state- and action-space problems.
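
The policy-search step described in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration under assumptions of our own, not the authors' implementation: the regressor (scikit-learn's ExtraTreesRegressor), the transition-batch format, and the candidate policy class are all placeholders chosen for the example. The point it shows is that the improvement step selects the candidate policy with the highest average action value over the sample, rather than performing a pointwise greedy maximization over a continuous action set.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor


def fitted_q_iteration(batch, candidate_policies, n_iterations=20, gamma=0.95):
    """Sketch of fitted Q-iteration with policy search over a restricted class.

    batch: list of (state, action, reward, next_state) tuples (1-D numpy arrays
           for state/action, float reward), collected by some behaviour policy.
    candidate_policies: list of callables mapping a state to an action.
    """
    sa_inputs = np.array([np.concatenate([s, a]) for s, a, _, _ in batch])
    rewards = np.array([r for _, _, r, _ in batch])
    next_states = [s2 for _, _, _, s2 in batch]

    q_model, policy = None, candidate_policies[0]
    for _ in range(n_iterations):
        if q_model is None:
            targets = rewards  # first iteration: fit the immediate reward
        else:
            # Evaluate the current policy's action values at the next states.
            next_inputs = np.array(
                [np.concatenate([s2, policy(s2)]) for s2 in next_states])
            targets = rewards + gamma * q_model.predict(next_inputs)

        # Regression step: fit Q(s, a) to the bootstrapped targets.
        q_model = ExtraTreesRegressor(n_estimators=50).fit(sa_inputs, targets)

        # Policy-search step: pick the candidate policy whose average action
        # value over the sampled states is largest, instead of maximizing
        # pointwise over a continuous action space.
        def avg_action_value(pi):
            inputs = np.array(
                [np.concatenate([s, pi(s)]) for s, _, _, _ in batch])
            return q_model.predict(inputs).mean()

        policy = max(candidate_policies, key=avg_action_value)

    return q_model, policy
```

As a toy example, the candidate class could consist of a few constant-action policies, e.g. `lambda s, a=a: a` for a handful of fixed actions; the paper's analysis allows much richer policy classes than this.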

Cite

Text

Antos et al. "Fitted Q-Iteration in Continuous Action-Space MDPs." Neural Information Processing Systems, 2007.

Markdown

[Antos et al. "Fitted Q-Iteration in Continuous Action-Space MDPs." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/antos2007neurips-fitted/)

BibTeX

@inproceedings{antos2007neurips-fitted,
  title     = {{Fitted Q-Iteration in Continuous Action-Space MDPs}},
  author    = {Antos, András and Szepesvári, Csaba and Munos, Rémi},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {9--16},
  url       = {https://mlanthology.org/neurips/2007/antos2007neurips-fitted/}
}