Optimistic Planning in Markov Decision Processes Using a Generative Model

Abstract

We consider the problem of online planning in a Markov decision process with discounted rewards for any given initial state. We consider the PAC sample complexity problem of computing, with probability $1-\delta$, an $\epsilon$-optimal action using the smallest possible number of calls to the generative model (which provides reward and next-state samples). We design an algorithm, called StOP (for Stochastic-Optimistic Planning), based on the "optimism in the face of uncertainty" principle. StOP can be used in the general setting, requires only a generative model, and enjoys a complexity bound that only depends on the local structure of the MDP.

Cite

Text

Szörényi et al. "Optimistic Planning in Markov Decision Processes Using a Generative Model." Neural Information Processing Systems, 2014.

Markdown

[Szörényi et al. "Optimistic Planning in Markov Decision Processes Using a Generative Model." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/szorenyi2014neurips-optimistic/)

BibTeX

@inproceedings{szorenyi2014neurips-optimistic,
  title     = {{Optimistic Planning in Markov Decision Processes Using a Generative Model}},
  author    = {Szörényi, Balázs and Kedenburg, Gunnar and Munos, Rémi},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {1035--1043},
  url       = {https://mlanthology.org/neurips/2014/szorenyi2014neurips-optimistic/}
}