On the Complexity of Policy Iteration
Abstract
Decision-making problems in uncertain or stochastic domains are often formulated as Markov decision processes (MDPs). Policy iteration (PI) is a popular algorithm for searching over policy-space, the size of which is exponential in the number of states. We are interested in bounds on the complexity of PI that do not depend on the value of the discount factor. In this paper we prove the first such non-trivial, worst-case, upper bounds on the number of iterations required by PI to converge to the optimal policy. Our analysis also sheds new light on the manner in which PI progresses through the space of policies.
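For readers unfamiliar with the algorithm the paper analyzes, below is a minimal sketch of policy iteration on a small finite MDP. It is illustrative only, not the paper's construction: the transition tensor `P`, reward matrix `R`, the discount factor `gamma`, and the random test MDP are all assumptions chosen for the example.

```python
# A minimal sketch of policy iteration (illustrative; not the paper's setup).
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """P: (A, S, S) transition probabilities; R: (S, A) expected rewards."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)        # start from an arbitrary policy
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) V = R_pi.
        P_pi = P[policy, np.arange(n_states)]     # (S, S) transition rows under pi
        R_pi = R[np.arange(n_states), policy]     # (S,) rewards under pi
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: one-step greedy lookahead on the Q-values.
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):    # no state can improve: optimal
            return policy, V
        policy = new_policy

# Tiny random MDP with 3 states and 2 actions (illustrative only).
rng = np.random.default_rng(0)
P = rng.random((2, 3, 3))
P /= P.sum(axis=2, keepdims=True)                 # normalize each transition row
R = rng.random((3, 2))
pi, V = policy_iteration(P, R)
print("optimal policy:", pi, "values:", V)
```

Each iteration evaluates the current policy exactly and then improves it greedily at every state; the paper's question is how many such iterations are needed in the worst case, independent of `gamma`.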
Cite
Text
Mansour and Singh. "On the Complexity of Policy Iteration." Conference on Uncertainty in Artificial Intelligence, 1999.
Markdown
[Mansour and Singh. "On the Complexity of Policy Iteration." Conference on Uncertainty in Artificial Intelligence, 1999.](https://mlanthology.org/uai/1999/mansour1999uai-complexity/)
BibTeX
@inproceedings{mansour1999uai-complexity,
title = {{On the Complexity of Policy Iteration}},
author = {Mansour, Yishay and Singh, Satinder},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {1999},
pages = {401-408},
url = {https://mlanthology.org/uai/1999/mansour1999uai-complexity/}
}