Monte Carlo POMDPs

Abstract

We present a Monte Carlo algorithm for learning to act in partially observable Markov decision processes (POMDPs) with real-valued state and action spaces. Our approach uses importance sampling for representing beliefs, and Monte Carlo approximation for belief propagation. A reinforcement learning algorithm, value iteration, is employed to learn value functions over belief states. Finally, a sample-based version of nearest neighbor is used to generalize across states. Initial empirical results suggest that our approach works well in practical applications.
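The belief representation and propagation described in the abstract can be illustrated with a short particle-filter sketch. The code below is a generic, minimal illustration of Monte Carlo belief propagation with importance sampling, not the paper's implementation; `sample_next_state` and `obs_likelihood` are hypothetical placeholders for a transition model and observation likelihood.

```python
import random

def belief_update(particles, action, observation,
                  sample_next_state, obs_likelihood, n_particles=100):
    """Propagate a particle-based belief through one action/observation step.

    particles: list of sampled states approximating the current belief b(s).
    Returns a new list of states approximating the posterior belief.
    """
    # 1. Prediction: sample successor states from the transition model.
    predicted = [sample_next_state(s, action) for s in particles]

    # 2. Correction: weight each particle by the observation likelihood
    #    (importance sampling, with the prediction as the proposal).
    weights = [obs_likelihood(observation, s) for s in predicted]
    total = sum(weights)
    if total == 0.0:
        # Degenerate case: no particle explains the observation; fall back
        # to uniform weights rather than dividing by zero.
        weights = [1.0] * len(predicted)

    # 3. Resampling: draw the next particle set in proportion to the weights.
    return random.choices(predicted, weights=weights, k=n_particles)
```

In this sketch, the resulting particle sets stand in for belief states; a value function learned over such sample-based beliefs (as the abstract describes) would then be queried by comparing new particle sets to stored ones, e.g. with a sample-based nearest-neighbor rule.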

Cite

Text

Thrun. "Monte Carlo POMDPs." Neural Information Processing Systems, 1999.

Markdown

[Thrun. "Monte Carlo POMDPs." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/thrun1999neurips-monte/)

BibTeX

@inproceedings{thrun1999neurips-monte,
  title     = {{Monte Carlo POMDPs}},
  author    = {Thrun, Sebastian},
  booktitle = {Neural Information Processing Systems},
  year      = {1999},
  pages     = {1064--1070},
  url       = {https://mlanthology.org/neurips/1999/thrun1999neurips-monte/}
}