Solving POMDPs with Continuous or Large Discrete Observation Spaces
Abstract
We describe methods to solve partially observable Markov decision processes (POMDPs) with continuous or large discrete observation spaces. Realistic problems often have rich observation spaces, posing significant problems for standard POMDP algorithms that require explicit enumeration of the observations. This problem is usually approached by imposing an a priori discretisation on the observation space, which can be sub-optimal for the decision making task. However, since only those observations that would change the policy need to be distinguished, the decision problem itself induces a lossless partitioning of the observation space. This paper demonstrates how to find this partition while computing a policy, and how the resulting discretisation of the observation space reveals the relevant features of the application domain. The algorithms are demonstrated on a toy example and on a realistic assisted living task.
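The key idea of the abstract can be illustrated with a small sketch: if a policy is represented by a set of alpha-vectors (one per conditional plan), then two observations that lead to the same maximizing alpha-vector never change the agent's choice, so they can be merged without loss. The following toy Python example assumes a fixed action, a discrete observation model `obs_model[o, s] = P(o | s)`, and illustrative numbers that are not from the paper; it is a minimal sketch of the partitioning principle, not the authors' algorithm.

```python
import numpy as np

def partition_observations(alpha, obs_model, belief):
    """Group observations by which plan (alpha-vector) they select.

    Observations that pick the same maximizing alpha-vector can be
    merged into one aggregate observation without changing the policy.
    All inputs are hypothetical toy quantities for illustration.
    """
    groups = {}
    for o, p_o in enumerate(obs_model):
        # Unnormalized posterior belief after seeing observation o.
        b_o = belief * p_o
        # Index of the conditional plan chosen after observing o.
        best = int(np.argmax(alpha @ b_o))
        groups.setdefault(best, []).append(o)
    return groups

alpha = np.array([[1.0, 0.0],    # plan 0: best when state 0 is likely
                  [0.0, 1.0]])   # plan 1: best when state 1 is likely
obs_model = np.array([[0.8, 0.1],   # obs 0: evidence for state 0
                      [0.1, 0.8],   # obs 1: evidence for state 1
                      [0.1, 0.1]])  # obs 2: uninformative
belief = np.array([0.5, 0.5])

print(partition_observations(alpha, obs_model, belief))
# → {0: [0, 2], 1: [1]}: obs 0 and obs 2 both select plan 0
```

Here observations 0 and 2 are merged because both leave plan 0 optimal, so the effective observation space shrinks from three symbols to two with no change to the induced policy.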
Cite

Text:
Hoey and Poupart. "Solving POMDPs with Continuous or Large Discrete Observation Spaces." International Joint Conference on Artificial Intelligence, 2005.

Markdown:
[Hoey and Poupart. "Solving POMDPs with Continuous or Large Discrete Observation Spaces." International Joint Conference on Artificial Intelligence, 2005.](https://mlanthology.org/ijcai/2005/hoey2005ijcai-solving/)

BibTeX:
@inproceedings{hoey2005ijcai-solving,
title = {{Solving POMDPs with Continuous or Large Discrete Observation Spaces}},
author = {Hoey, Jesse and Poupart, Pascal},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2005},
pages = {1332--1338},
url = {https://mlanthology.org/ijcai/2005/hoey2005ijcai-solving/}
}