Optimizing Expectation with Guarantees in POMDPs

Abstract

A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the “expectation” and “threshold” approaches and consider a “guaranteed payoff optimization (GPO)” problem for POMDPs, where we are given a threshold t and the objective is to find a policy σ such that a) each possible outcome of σ yields a discounted-sum payoff of at least t, and b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
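
For concreteness, the two requirements in the abstract can be read as one constrained optimization problem. The formalization below is a sketch in our own notation, not necessarily the paper's: γ ∈ (0,1) is the discount factor, r_i the payoff at step i, and Outcomes(σ) the set of plays ρ consistent with policy σ.

% GPO sketch: maximize expected discounted payoff (requirement b)
% subject to every possible outcome meeting the threshold t (requirement a).
\max_{\sigma}\; \mathbb{E}^{\sigma}\!\Bigl[\sum_{i=0}^{\infty} \gamma^{i} r_i\Bigr]
\quad \text{s.t.} \quad
\sum_{i=0}^{\infty} \gamma^{i} r_i(\rho) \;\ge\; t
\quad \text{for every } \rho \in \mathrm{Outcomes}(\sigma).

Requirement a) is the worst-case (sure) guarantee; requirement b) asks for the best expectation among the policies that satisfy it.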

Cite

Text

Chatterjee et al. "Optimizing Expectation with Guarantees in POMDPs." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/aaai.v31i1.11046

Markdown

[Chatterjee et al. "Optimizing Expectation with Guarantees in POMDPs." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/chatterjee2017aaai-optimizing/) doi:10.1609/aaai.v31i1.11046

BibTeX

@inproceedings{chatterjee2017aaai-optimizing,
  title     = {{Optimizing Expectation with Guarantees in POMDPs}},
  author    = {Chatterjee, Krishnendu and Novotný, Petr and Pérez, Guillermo A. and Raskin, Jean-François and Zikelic, Dorde},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3725--3732},
  doi       = {10.1609/aaai.v31i1.11046},
  url       = {https://mlanthology.org/aaai/2017/chatterjee2017aaai-optimizing/}
}