A POMDP Extension with Belief-Dependent Rewards

Abstract

Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly involves reducing the uncertainty about the state. To that end, we introduce rho-POMDPs, an extension of POMDPs where the reward function rho depends on the belief state. We show that, under the common assumption that rho is convex, the value function is also convex, which makes it possible to (1) approximate rho arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes.
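
To make the PWLC idea concrete, the following is a minimal sketch, not taken from the paper itself, that uses the negative belief entropy as an example of a convex belief-dependent reward rho(b) and builds a PWLC lower approximation of it from tangent hyperplanes at sampled (interior) belief points. The function names, the sampled points, and the two-state example are illustrative assumptions.

import numpy as np

def rho(b):
    # Example belief-dependent reward: negative entropy of the belief (convex in b).
    b = np.asarray(b, dtype=float)
    nz = b > 0
    return float(np.sum(b[nz] * np.log(b[nz])))

def tangent_at(b):
    # Gradient of rho at an interior belief point: d/db_s [b_s log b_s] = log b_s + 1.
    b = np.asarray(b, dtype=float)
    return np.log(b) + 1.0

def pwlc_approx(belief_points):
    # One hyperplane (alpha-vector plus offset) per sampled belief point.
    alphas = []
    for b in belief_points:
        b = np.asarray(b, dtype=float)
        g = tangent_at(b)
        offset = rho(b) - float(g @ b)   # tangent plane: g . b + offset
        alphas.append((g, offset))
    return alphas

def eval_pwlc(alphas, b):
    # PWLC value: max over hyperplanes; by convexity of rho it lower-bounds rho(b).
    b = np.asarray(b, dtype=float)
    return max(float(g @ b) + c for g, c in alphas)

# Two-state belief simplex; more tangent points give a tighter approximation.
points = [np.array([p, 1.0 - p]) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
alphas = pwlc_approx(points)
b = np.array([0.25, 0.75])
print(rho(b), eval_pwlc(alphas, b))   # the PWLC value never exceeds rho(b)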

Cite

Text

Araya et al. "A POMDP Extension with Belief-Dependent Rewards." Neural Information Processing Systems, 2010.

Markdown

[Araya et al. "A POMDP Extension with Belief-Dependent Rewards." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/araya2010neurips-pomdp/)

BibTeX

@inproceedings{araya2010neurips-pomdp,
  title     = {{A POMDP Extension with Belief-Dependent Rewards}},
  author    = {Araya, Mauricio and Buffet, Olivier and Thomas, Vincent and Charpillet, François},
  booktitle = {Neural Information Processing Systems},
  year      = {2010},
  pages     = {64--72},
  url       = {https://mlanthology.org/neurips/2010/araya2010neurips-pomdp/}
}