Stochastic Local Search for POMDP Controllers

Abstract

The search for finite-state controllers for partially observable Markov decision processes (POMDPs) is often based on approaches like gradient ascent, attractive because of their relatively low computational cost. In this paper, we illustrate a basic problem with gradient-based methods applied to POMDPs, where the sequential nature of the decision problem is at issue, and propose a new stochastic local search method as an alternative. The heuristics used in our procedure mimic the sequential reasoning inherent in optimal dynamic programming (DP) approaches. We show that our algorithm consistently finds higher quality controllers than gradient ascent, and is competitive with (and, for some problems, superior to) other state-of-the-art controller and DP-based algorithms on large-scale POMDPs.

Cite

Text

Braziunas and Boutilier. "Stochastic Local Search for POMDP Controllers." AAAI Conference on Artificial Intelligence, 2004.

Markdown

[Braziunas and Boutilier. "Stochastic Local Search for POMDP Controllers." AAAI Conference on Artificial Intelligence, 2004.](https://mlanthology.org/aaai/2004/braziunas2004aaai-stochastic/)

BibTeX

@inproceedings{braziunas2004aaai-stochastic,
  title     = {{Stochastic Local Search for POMDP Controllers}},
  author    = {Braziunas, Darius and Boutilier, Craig},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2004},
  pages     = {690--696},
  url       = {https://mlanthology.org/aaai/2004/braziunas2004aaai-stochastic/}
}