Solving POMDPs by Searching in Policy Space
Abstract
Most algorithms for solving POMDPs iteratively improve a value function that implicitly represents a policy and are said to search in value function space. This paper presents an approach to solving POMDPs that represents a policy explicitly as a finite-state controller and iteratively improves the controller by searching in policy space. Two related algorithms illustrate this approach. The first is a policy iteration algorithm that can outperform value iteration in solving infinite-horizon POMDPs. It provides the foundation for a new heuristic search algorithm that promises further speedup by focusing computational effort on regions of the problem space that are reachable, or likely to be reached, from a start state.
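The policy iteration approach described in the abstract alternates between evaluating a finite-state controller and improving it. Below is a minimal sketch of the evaluation step only, which solves a linear system over (controller node, world state) pairs; the toy POMDP, its randomly generated parameters, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical toy POMDP for illustration; none of these numbers
# come from the paper.
S, A, O = 2, 2, 2          # states, actions, observations
gamma = 0.95               # discount factor

rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(A, S))   # T[a, s, s'] = P(s' | s, a)
Z = rng.dirichlet(np.ones(O), size=(A, S))   # Z[a, s', o] = P(o | s', a)
R = rng.uniform(-1.0, 1.0, size=(S, A))      # R[s, a] = expected reward

# A finite-state controller: each node selects an action and maps
# each observation to a successor node.
N = 2
action = np.array([0, 1])                    # action[n]
succ = np.array([[0, 1], [1, 0]])            # succ[n, o] = next node

# Policy evaluation: the values V[n, s] satisfy the linear system
#   V[n, s] = R[s, a_n]
#             + gamma * sum_{s', o} T[a_n, s, s'] Z[a_n, s', o] V[succ[n, o], s']
# which we assemble as (I - gamma * P) V = b and solve exactly.
idx = lambda n, s: n * S + s
M = np.eye(N * S)
b = np.zeros(N * S)
for n in range(N):
    a = action[n]
    for s in range(S):
        b[idx(n, s)] = R[s, a]
        for s2 in range(S):
            for o in range(O):
                M[idx(n, s), idx(succ[n, o], s2)] -= gamma * T[a, s, s2] * Z[a, s2, o]

V = np.linalg.solve(M, b).reshape(N, S)
print(V)   # V[n, s]: value of running the controller from node n in state s
```

Because the controller is a finite automaton, evaluation is exact linear algebra rather than iterative approximation; the improvement step (not sketched here) would then add, replace, or prune controller nodes based on these values.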
Cite
Text
Hansen. "Solving POMDPs by Searching in Policy Space." Conference on Uncertainty in Artificial Intelligence, 1998.Markdown
[Hansen. "Solving POMDPs by Searching in Policy Space." Conference on Uncertainty in Artificial Intelligence, 1998.](https://mlanthology.org/uai/1998/hansen1998uai-solving/)BibTeX
@inproceedings{hansen1998uai-solving,
title = {{Solving POMDPs by Searching in Policy Space}},
author = {Hansen, Eric A.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {1998},
  pages = {211--219},
url = {https://mlanthology.org/uai/1998/hansen1998uai-solving/}
}