Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes

Abstract

In many situations, it is desirable to optimize a sequence of decisions by maximizing a primary objective while respecting constraints on secondary objectives. Such problems can be naturally modeled as constrained partially observable Markov decision processes (CPOMDPs) when the environment is partially observable. In this work, we describe a technique based on approximate linear programming to optimize policies in CPOMDPs. The optimization is performed offline and produces a finite state controller with desirable performance guarantees. The approach outperforms a constrained version of point-based value iteration on a suite of benchmark problems.
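The paper's method optimizes finite state controllers for CPOMDPs via approximate linear programming; the details are in the paper itself. As a rough, hedged illustration of the underlying idea only, the sketch below solves the classical occupancy-measure LP for the fully observable special case (a constrained MDP) with made-up toy numbers: maximize expected discounted reward subject to a bound on expected discounted cost. All numbers, names, and the problem instance are assumptions for illustration, not the paper's algorithm or benchmarks.

```python
# Sketch: occupancy-measure LP for a constrained MDP (fully observable
# special case of a CPOMDP). Toy instance; all numbers are made up.
import numpy as np
from scipy.optimize import linprog

S, A = 2, 2                    # states and actions (hypothetical toy problem)
gamma = 0.9                    # discount factor
mu0 = np.array([1.0, 0.0])     # initial state distribution

# P[s, a, s']: transition probabilities
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],
    [[0.5, 0.5], [0.3, 0.7]],
])
r = np.array([[1.0, 0.0], [0.0, 2.0]])  # primary reward r(s, a)
d = np.array([[1.0, 0.0], [0.0, 1.0]])  # secondary cost d(s, a)
budget = 3.0                            # cap on expected discounted cost

n = S * A  # LP variables: discounted occupancy measure x(s, a), flattened

# Flow conservation for each successor state s':
#   sum_a x(s', a) - gamma * sum_{s,a} P(s'|s,a) x(s, a) = mu0(s')
A_eq = np.zeros((S, n))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]

# Secondary-objective constraint: expected discounted cost <= budget
A_ub = d.reshape(1, n)
b_ub = np.array([budget])

# Maximize expected discounted reward (linprog minimizes, hence the sign flip)
res = linprog(-r.reshape(n), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))

x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)  # stochastic policy pi(a|s)
```

Note that the total occupancy mass always equals 1/(1 - gamma), and the optimal constrained policy is generally stochastic, randomizing between rewarding and cheap actions so that the cost constraint holds with equality when it binds. The partially observable case treated in the paper replaces states with controller nodes and is substantially more involved.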

Cite

Text

Poupart et al. "Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9655

Markdown

[Poupart et al. "Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/poupart2015aaai-approximate/) doi:10.1609/AAAI.V29I1.9655

BibTeX

@inproceedings{poupart2015aaai-approximate,
  title     = {{Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes}},
  author    = {Poupart, Pascal and Malhotra, Aarti and Pei, Pei and Kim, Kee-Eung and Goh, Bongseok and Bowling, Michael},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {3342--3348},
  doi       = {10.1609/AAAI.V29I1.9655},
  url       = {https://mlanthology.org/aaai/2015/poupart2015aaai-approximate/}
}