Planning with Durative Actions in Stochastic Domains

Abstract

Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative actions. This poses severe restrictions on modeling and solving real-world planning problems. We extend the MDP model to incorporate: 1) simultaneous action execution, 2) durative actions, and 3) stochastic durations. We develop several algorithms to combat the computational explosion introduced by these features. The key theoretical ideas used in building these algorithms are: modeling a complex problem as an MDP in an extended state/action space, pruning of irrelevant actions, sampling of relevant actions, using informed heuristics to guide the search, hybridizing different planners to achieve the benefits of both, and approximating the problem and replanning. Our empirical evaluation illuminates the different merits of the various algorithms, viz., optimality, empirical closeness to optimality, theoretical error bounds, and speed.
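The base model the paper extends is the standard MDP, which is typically solved by dynamic programming. As a point of reference, here is a minimal value-iteration sketch on a hypothetical two-state MDP; this illustrates only the sequential, non-durative base model, not the paper's extended state/action space:

```python
# Minimal value iteration on a toy 2-state MDP (illustrative only; the
# paper extends this base model with concurrent, durative actions).
# transitions[s][a] = list of (probability, next_state, reward) outcomes.

def value_iteration(transitions, gamma=0.9, eps=1e-6):
    """Return the optimal value function V*(s) via Bellman backups."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Bellman optimality backup: max over actions of expected value.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy domain: from state 0, "go" reaches state 1 with probability 0.8
# and earns reward 1.0; state 1 yields reward 0.5 per step forever.
toy = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.5)]},
}
V = value_iteration(toy)
```

With discount 0.9, state 1 converges to 0.5 / (1 - 0.9) = 5.0, and state 0 to 4.4 / 0.82, since the "go" action dominates "stay".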

Cite

Text

Mausam and Weld. "Planning with Durative Actions in Stochastic Domains." Journal of Artificial Intelligence Research, 2008. doi:10.1613/JAIR.2269

Markdown

[Mausam and Weld. "Planning with Durative Actions in Stochastic Domains." Journal of Artificial Intelligence Research, 2008.](https://mlanthology.org/jair/2008/mausam2008jair-planning/) doi:10.1613/JAIR.2269

BibTeX

@article{mausam2008jair-planning,
  title     = {{Planning with Durative Actions in Stochastic Domains}},
  author    = {{Mausam} and Weld, Daniel S.},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2008},
  pages     = {33--82},
  doi       = {10.1613/JAIR.2269},
  volume    = {31},
  url       = {https://mlanthology.org/jair/2008/mausam2008jair-planning/}
}