Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method
Abstract
There is much interest in using partially observable Markov decision processes (POMDPs) as a formal model for planning in stochastic domains. This paper is concerned with finding optimal policies for POMDPs. We propose several improvements to incremental pruning, presently the most efficient exact algorithm for solving POMDPs.
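To make the abstract's reference to incremental pruning concrete, the sketch below is a hypothetical, simplified illustration (not the authors' implementation): it represents a piecewise-linear value function as a list of alpha-vectors over the state space, uses a linear program to test whether a vector is maximal at some belief, and combines two vector sets with a pruned cross-sum, the step that incremental pruning interleaves across observations. The function names (witness_belief, prune, pruned_cross_sum) and the use of NumPy/SciPy are assumptions made for illustration only.

# Hypothetical sketch (not the paper's implementation): alpha-vector pruning and
# the pruned cross-sum that incremental pruning interleaves across observations.
import numpy as np
from itertools import product
from scipy.optimize import linprog

def witness_belief(w, others, n_states):
    # Find a belief where vector w is strictly better than every vector in
    # `others`, or return None. LP: max d s.t. b.(w - u) >= d, b in the simplex.
    if not others:
        return np.full(n_states, 1.0 / n_states)
    c = np.zeros(n_states + 1)
    c[-1] = -1.0                                       # linprog minimizes, so use -d
    A_ub = np.array([np.append(-(w - u), 1.0) for u in others])
    b_ub = np.zeros(len(others))
    A_eq = np.append(np.ones(n_states), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_states + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if res.success and -res.fun > 1e-9:
        return res.x[:n_states]
    return None

def prune(vectors, n_states):
    # Keep only vectors that are maximal at some belief state.
    unique = []
    for w in vectors:                                  # drop exact duplicates first
        if not any(np.allclose(w, u) for u in unique):
            unique.append(w)
    return [w for w in unique
            if witness_belief(w, [u for u in unique if u is not w], n_states) is not None]

def pruned_cross_sum(set_a, set_b, n_states):
    # Incremental step: cross-sum two alpha-vector sets, then prune immediately
    # rather than waiting until all observations have been combined.
    return prune([a + b for a, b in product(set_a, set_b)], n_states)

In the full algorithm, the vector set for each action is built by repeatedly applying this pruned cross-sum across observations, and the next value function is the pruned union of the per-action sets; the paper's improvements concern how this pruning is organized.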
Cite
Zhang, Nevin Lianwen, and Stephen S. Lee. "Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method." Conference on Uncertainty in Artificial Intelligence, 1998, pp. 523-530. https://mlanthology.org/uai/1998/zhang1998uai-planning/

BibTeX
@inproceedings{zhang1998uai-planning,
  title     = {{Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method}},
  author    = {Zhang, Nevin Lianwen and Lee, Stephen S.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {1998},
  pages     = {523-530},
  url       = {https://mlanthology.org/uai/1998/zhang1998uai-planning/}
}