Qualitative Possibilistic Mixed-Observable MDPs
Abstract
Possibilistic and qualitative POMDPs (π-POMDPs) are counterparts of POMDPs used to model situations where the agent's initial belief or observation probabilities are imprecise due to a lack of past experience or insufficient data collection. However, like probabilistic POMDPs, optimally solving π-POMDPs is intractable: the finite belief state space grows exponentially with the number of system states. In this paper, a possibilistic version of Mixed-Observable MDPs is presented to work around this issue: when some state variables are fully observable, the complexity of solving π-POMDPs can be dramatically reduced. A value iteration algorithm for this new formulation under an infinite horizon is then proposed, and the optimality of the returned policy (for a specified criterion) is proved assuming the existence of a "stay" action in some goal states. Experiments finally show that this possibilistic model outperforms the probabilistic POMDPs commonly used in robotics on a target recognition problem in which the agent's observations are imprecise.
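To make the value-iteration idea in the abstract concrete, below is a minimal sketch of qualitative (possibilistic) value iteration for the fully observable special case (a π-MDP) under the optimistic qualitative criterion from the π-(PO)MDP literature, including a "stay" action in the goal state as the paper's optimality argument assumes. The toy states, actions, possibility tables, preference function `MU`, and the helpers `value_iteration` and `greedy_policy` are all illustrative assumptions, not the paper's π-MOMDP algorithm or benchmark.

```python
from fractions import Fraction

# Toy fully observable pi-MDP (illustrative values, not from the paper).
# Transition possibilities PI[s][a][s'] and preferences MU[s] live on a
# finite qualitative scale, here {0, 1/3, 2/3, 1}; for each (s, a) the
# possibility distribution is normalized: max over s' equals 1.
PI = {
    "s0":   {"right": {"s1": Fraction(1), "trap": Fraction(1, 3)},
             "down":  {"trap": Fraction(1), "s1": Fraction(1, 3)}},
    "s1":   {"right": {"goal": Fraction(1), "trap": Fraction(1, 3)},
             "down":  {"trap": Fraction(1)}},
    # "stay" action in the goal state, as assumed by the paper's
    # optimality argument; the trap only loops on itself.
    "goal": {"stay":  {"goal": Fraction(1)}},
    "trap": {"stay":  {"trap": Fraction(1)}},
}
MU = {"s0": Fraction(0), "s1": Fraction(0),
      "goal": Fraction(1), "trap": Fraction(0)}

def value_iteration(pi, mu):
    """Optimistic qualitative Bellman updates:
        u(s) = max( mu(s), max_a max_{s'} min( pi(s'|s,a), u(s') ) )
    Taking the max with mu keeps the value sequence nondecreasing, so it
    converges in finitely many sweeps on a finite qualitative scale."""
    u = dict(mu)
    while True:
        new_u = {
            s: max(mu[s],
                   max(max(min(p, u[s2]) for s2, p in pi[s][a].items())
                       for a in pi[s]))
            for s in pi
        }
        if new_u == u:
            return u
        u = new_u

def greedy_policy(pi, u):
    """One optimal action per state under the optimistic criterion."""
    return {s: max(pi[s],
                   key=lambda a: max(min(p, u[s2])
                                     for s2, p in pi[s][a].items()))
            for s in pi}

u = value_iteration(PI, MU)
print(u)                     # every state except 'trap' reaches value 1
print(greedy_policy(PI, u))  # {'s0': 'right', 's1': 'right',
                             #  'goal': 'stay', 'trap': 'stay'}
```

Because the updates use only min and max over a finite scale, no discount factor is needed; the "stay" action lets the goal's preference persist as a fixed point, mirroring the role the abstract assigns to goal states.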
Cite
Text
Drougard et al. "Qualitative Possibilistic Mixed-Observable MDPs." Conference on Uncertainty in Artificial Intelligence, 2013.
Markdown
[Drougard et al. "Qualitative Possibilistic Mixed-Observable MDPs." Conference on Uncertainty in Artificial Intelligence, 2013.](https://mlanthology.org/uai/2013/drougard2013uai-qualitative/)
BibTeX
@inproceedings{drougard2013uai-qualitative,
  title = {{Qualitative Possibilistic Mixed-Observable MDPs}},
  author = {Drougard, Nicolas and Teichteil-Königsbuch, Florent and Farges, Jean-Loup and Dubois, Didier},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year = {2013},
  url = {https://mlanthology.org/uai/2013/drougard2013uai-qualitative/}
}