Bounded Real-Time Dynamic Programming: RTDP with Monotone Upper Bounds and Performance Guarantees

Abstract

Markov decision processes (MDPs) are an attractive formalization for planning, but realistic problems often have intractably large state spaces. When we only need a partial policy to get from a fixed start state to a goal, restricting computation to the states relevant to this task can make much larger problems tractable. We introduce a new algorithm, Bounded RTDP, which can produce partial policies with strong performance guarantees while touching only a fraction of the state space, even on problems where other algorithms would have to visit the full state space. To do so, Bounded RTDP maintains both upper and lower bounds on the optimal value function. The performance of Bounded RTDP is greatly aided by the introduction of a new technique for efficiently finding suitable upper bounds; this technique can also be used to provide informed initialization to a wide range of other planning algorithms.
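
To make the two-bound idea concrete, here is a minimal Python sketch of a Bounded-RTDP-style trial loop on a stochastic shortest-path MDP (minimizing expected cost to a goal). It is an illustration of the general scheme, not the paper's exact pseudocode: the state type, the transition model P(s, a) returning (next_state, probability) pairs, the parameters eps and tau, and the choice to act greedily on the lower bound are all assumptions of this sketch. The bound dictionaries Vl and Vu are assumed pre-initialized with admissible values (Vl <= V* <= Vu, and 0 at goal states).

import random

def brtdp(s0, is_goal, actions, P, cost, Vl, Vu,
          eps=1e-3, tau=10.0, max_trials=10000):
    """Sketch of Bounded RTDP: run sampled trials from s0, backing up
    both bounds, until the bound gap at the start state falls below eps.
    Vl and Vu map states to admissible lower/upper bounds on cost-to-go."""

    def backup(V, s):
        # One Bellman backup of bound V at s; returns greedy value and action.
        best_val, best_act = float("inf"), None
        for a in actions(s):
            q = cost(s, a) + sum(p * V[s2] for s2, p in P(s, a))
            if q < best_val:
                best_val, best_act = q, a
        return best_val, best_act

    for _ in range(max_trials):
        if Vu[s0] - Vl[s0] <= eps:
            break  # bounds at the start state are tight: we are done
        s, traj = s0, []
        while not is_goal(s):
            traj.append(s)
            Vu[s], _ = backup(Vu, s)   # back up the upper bound
            Vl[s], a = backup(Vl, s)   # back up the lower bound; act greedily on it
            succ = P(s, a)
            # Weight successors by their remaining bound gap, biasing
            # sampling toward states whose value is still uncertain.
            gaps = [p * (Vu[s2] - Vl[s2]) for s2, p in succ]
            total = sum(gaps)
            if total < (Vu[s0] - Vl[s0]) / tau:
                break  # expected remaining uncertainty is small: end the trial
            r = random.uniform(0.0, total)
            for (s2, _), g in zip(succ, gaps):
                r -= g
                if r <= 0.0:
                    s = s2
                    break
            else:
                s = succ[-1][0]  # floating-point fallback
        # Back up visited states in reverse to propagate new bound values.
        for s in reversed(traj):
            Vu[s], _ = backup(Vu, s)
            Vl[s], _ = backup(Vl, s)
    return Vl, Vu

Because both bounds are maintained, Vu(s0) - Vl(s0) gives an anytime certificate of how far the current greedy policy can be from optimal, which is what lets the algorithm stop after touching only the relevant fraction of the state space.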

Cite

Text

McMahan et al. "Bounded Real-Time Dynamic Programming: RTDP with Monotone Upper Bounds and Performance Guarantees." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102423

Markdown

[McMahan et al. "Bounded Real-Time Dynamic Programming: RTDP with Monotone Upper Bounds and Performance Guarantees." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/mcmahan2005icml-bounded/) doi:10.1145/1102351.1102423

BibTeX

@inproceedings{mcmahan2005icml-bounded,
  title     = {{Bounded Real-Time Dynamic Programming: RTDP with Monotone Upper Bounds and Performance Guarantees}},
  author    = {McMahan, H. Brendan and Likhachev, Maxim and Gordon, Geoffrey J.},
  booktitle = {International Conference on Machine Learning},
  year      = {2005},
  pages     = {569--576},
  doi       = {10.1145/1102351.1102423},
  url       = {https://mlanthology.org/icml/2005/mcmahan2005icml-bounded/}
}