Solving Risk-Sensitive POMDPs with and Without Cost Observations

Abstract

Partially Observable Markov Decision Processes (POMDPs) are often used to model planning problems under uncertainty. The goal in Risk-Sensitive POMDPs (RS-POMDPs) is to find a policy that maximizes the probability that the cumulative cost is within some user-defined cost threshold. In this paper, unlike the existing POMDP literature, we distinguish between the cases where costs can and cannot be observed, and we show the empirical impact of cost observations. We also introduce a new search-based algorithm to solve RS-POMDPs and show that it is faster and more scalable than existing approaches in two synthetic domains and a taxi domain generated with real-world data.
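As a minimal sketch of the objective described above (the notation below is ours, not taken from the paper): the RS-POMDP problem seeks a policy that maximizes the probability that the accumulated cost stays within the threshold,

$$\pi^{*} \;=\; \operatorname*{arg\,max}_{\pi} \; \Pr\!\left( \sum_{t=0}^{T-1} c(s_t, a_t) \le \theta_0 \;\middle|\; \pi,\, b_0 \right),$$

where $b_0$ denotes the initial belief, $c(s_t, a_t)$ the per-step cost, and $\theta_0$ the user-defined cost threshold.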

Cite

Text

Hou et al. "Solving Risk-Sensitive POMDPs with and Without Cost Observations." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10402

Markdown

[Hou et al. "Solving Risk-Sensitive POMDPs with and Without Cost Observations." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/hou2016aaai-solving/) doi:10.1609/AAAI.V30I1.10402

BibTeX

@inproceedings{hou2016aaai-solving,
  title     = {{Solving Risk-Sensitive POMDPs with and Without Cost Observations}},
  author    = {Hou, Ping and Yeoh, William and Varakantham, Pradeep},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {3138--3144},
  doi       = {10.1609/AAAI.V30I1.10402},
  url       = {https://mlanthology.org/aaai/2016/hou2016aaai-solving/}
}