Anytime State-Based Solution Methods for Decision Processes with Non-Markovian Rewards

Abstract

A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into an MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic (FLTL) to express rewards. Our translation has the effect of embedding model checking in the solution method. It results in an MDP of the minimal size achievable without stepping outside the anytime framework, and consequently in better policies by the deadline.
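To illustrate the translation idea the abstract describes, here is a minimal, hypothetical sketch (not the paper's implementation, and far simpler than full FLTL progression): a non-Markovian reward such as "reward 1 in every state reached once p has held" is made Markovian by augmenting each state with just enough temporal information, here a single bit that plays the role of a progressed formula. The names `translate_step`, `transition`, and `holds_p` are illustrative assumptions.

```python
# Hypothetical sketch of the NMRDP-to-MDP translation idea: each state of
# the equivalent MDP is a pair (NMRDP state, temporal annotation). For the
# toy reward "1 whenever p has held at some point so far", the annotation
# is one boolean; in the paper it is a progressed FLTL formula.

def translate_step(state, seen_p, action, transition, holds_p):
    """Expand one transition of the equivalent MDP.

    state      -- current NMRDP state
    seen_p     -- temporal annotation: has p held in any state so far?
    action     -- action taken (ignored by this toy deterministic model)
    transition -- function (state, action) -> next NMRDP state
    holds_p    -- function state -> bool: does proposition p hold there?
    """
    next_state = transition(state, action)
    next_seen_p = seen_p or holds_p(next_state)  # "progress" the annotation
    reward = 1 if next_seen_p else 0             # now a Markovian reward
    return (next_state, next_seen_p), reward
```

Because the annotation is carried inside the expanded state, an anytime state-based solver can build only the reachable `(state, annotation)` pairs on demand rather than the full product space up front.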

Cite

Text

Thiébaux et al. "Anytime State-Based Solution Methods for Decision Processes with Non-Markovian Rewards." Conference on Uncertainty in Artificial Intelligence, 2002.

Markdown

[Thiébaux et al. "Anytime State-Based Solution Methods for Decision Processes with Non-Markovian Rewards." Conference on Uncertainty in Artificial Intelligence, 2002.](https://mlanthology.org/uai/2002/thiebaux2002uai-anytime/)

BibTeX

@inproceedings{thiebaux2002uai-anytime,
  title     = {{Anytime State-Based Solution Methods for Decision Processes with Non-Markovian Rewards}},
  author    = {Thiébaux, Sylvie and Kabanza, Froduald and Slaney, John K.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2002},
  pages     = {501--510},
  url       = {https://mlanthology.org/uai/2002/thiebaux2002uai-anytime/}
}