Memory-Efficient Symbolic Online Planning for Factored MDPs

Abstract

Factored Markov Decision Processes (MDPs) are a de facto standard for compactly modeling sequential decision-making problems with uncertainty. Offline planning based on symbolic operators exploits the factored structure of MDPs, but is memory intensive. We present new memory-efficient symbolic operators for online planning that effectively generalize experience. The soundness of the operators and the convergence of the planning algorithms are shown, followed by experiments that demonstrate superior scalability on benchmark problems.

Cite

Text

Raghavan et al. "Memory-Efficient Symbolic Online Planning for Factored MDPs." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Raghavan et al. "Memory-Efficient Symbolic Online Planning for Factored MDPs." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/raghavan2015uai-memory/)

BibTeX

@inproceedings{raghavan2015uai-memory,
  title     = {{Memory-Efficient Symbolic Online Planning for Factored MDPs}},
  author    = {Raghavan, Aswin and Khardon, Roni and Tadepalli, Prasad and Fern, Alan},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2015},
  pages     = {732--741},
  url       = {https://mlanthology.org/uai/2015/raghavan2015uai-memory/}
}