On the Complexity of Solving Markov Decision Problems

Abstract

Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs.
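To make the object of the analysis concrete, below is a minimal sketch of value iteration, one of the standard MDP solution algorithms whose running time such complexity analyses address. This is illustrative code, not code from the paper; the array names P, R and the parameters gamma and tol are hypothetical, assuming a finite MDP given as a transition tensor and reward matrix.

# Minimal value-iteration sketch for a finite MDP (illustrative only; not from the paper).
# Assumptions: A actions, S states, transition tensor P[a][s][s'] = Pr(s' | s, a),
# reward matrix R[s][a], and discount factor gamma in [0, 1).
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Return an approximately optimal value function and a greedy policy."""
    num_actions, num_states, _ = P.shape
    V = np.zeros(num_states)
    while True:
        # Q[a, s] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R.T + gamma * P.dot(V)           # shape (A, S)
        V_new = Q.max(axis=0)                # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:  # stop when successive iterates are close
            return V_new, Q.argmax(axis=0)
        V = V_new

Each iteration performs O(|A||S|^2) arithmetic operations; how the number of iterations and the per-iteration cost grow with the size of the MDP is the kind of question the paper's complexity results address.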

Cite

Text

Littman et al. "On the Complexity of Solving Markov Decision Problems." Conference on Uncertainty in Artificial Intelligence, 1995.

Markdown

[Littman et al. "On the Complexity of Solving Markov Decision Problems." Conference on Uncertainty in Artificial Intelligence, 1995.](https://mlanthology.org/uai/1995/littman1995uai-complexity/)

BibTeX

@inproceedings{littman1995uai-complexity,
  title     = {{On the Complexity of Solving Markov Decision Problems}},
  author    = {Littman, Michael L. and Dean, Thomas L. and Kaelbling, Leslie Pack},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {1995},
  pages     = {394--402},
  url       = {https://mlanthology.org/uai/1995/littman1995uai-complexity/}
}