Exploiting Structure in Policy Construction

Abstract

Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small state spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods.

Cite

Text

Boutilier et al. "Exploiting Structure in Policy Construction." International Joint Conference on Artificial Intelligence, 1995.

Markdown

[Boutilier et al. "Exploiting Structure in Policy Construction." International Joint Conference on Artificial Intelligence, 1995.](https://mlanthology.org/ijcai/1995/boutilier1995ijcai-exploiting/)

BibTeX

@inproceedings{boutilier1995ijcai-exploiting,
  title     = {{Exploiting Structure in Policy Construction}},
  author    = {Boutilier, Craig and Dearden, Richard and Goldszmidt, Moisés},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1995},
  pages     = {1104--1113},
  url       = {https://mlanthology.org/ijcai/1995/boutilier1995ijcai-exploiting/}
}