Optimizing Memory-Bounded Controllers for Decentralized POMDPs
Abstract
We present a memory-bounded optimization approach for solving infinite-horizon decentralized POMDPs. Policies for each agent are represented by stochastic finite-state controllers. We formulate the problem of optimizing these policies as a nonlinear program, leveraging powerful existing nonlinear optimization techniques to solve it. While existing solvers only guarantee locally optimal solutions, we show that our formulation produces higher-quality controllers than the state-of-the-art approach. We also incorporate a shared source of randomness in the form of a correlation device to further increase solution quality with only a limited increase in space and time. Our experimental results show that nonlinear optimization can be used to provide high-quality, concise solutions to decentralized decision problems under uncertainty.
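As a rough illustration of the kind of nonlinear program involved (a sketch only, not necessarily the paper's exact formulation), assume each agent i's controller is parameterized by action probabilities \(x_i(a_i \mid q_i)\) and node-transition probabilities \(x_i(q_i' \mid q_i, a_i, o_i)\), with initial belief \(b_0\), joint initial nodes \(\vec{q}_0\), reward \(R\), state transitions \(P\), observations \(O\), and discount \(\gamma\). The program maximizes the value of the initial controller nodes subject to Bellman-style and probability constraints:

\[
\max_{x,\,V}\ \ \sum_{s} b_0(s)\,V(\vec{q}_0, s)
\]
subject to, for every joint node \(\vec{q}\) and state \(s\),
\[
V(\vec{q}, s) \;=\; \sum_{\vec{a}} \prod_i x_i(a_i \mid q_i)\,
\Big[\, R(s,\vec{a}) \,+\, \gamma \sum_{s'} P(s' \mid s,\vec{a}) \sum_{\vec{o}} O(\vec{o} \mid s',\vec{a})
\sum_{\vec{q}\,'} \prod_i x_i(q_i' \mid q_i, a_i, o_i)\, V(\vec{q}\,', s')\,\Big],
\]
together with \(x_i(\cdot) \ge 0\), \(\sum_{a_i} x_i(a_i \mid q_i) = 1\), and \(\sum_{q_i'} x_i(q_i' \mid q_i, a_i, o_i) = 1\).

The value variables \(V(\vec{q}, s)\) are optimized jointly with the controller parameters, which is what makes the program nonlinear. The correlation device adds a shared random signal with its own stochastic transitions on which each agent's parameters may additionally condition.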
Cite
Text
Amato et al. "Optimizing Memory-Bounded Controllers for Decentralized POMDPs." Conference on Uncertainty in Artificial Intelligence, 2007. doi:10.5555/3020488.3020489
Markdown
[Amato et al. "Optimizing Memory-Bounded Controllers for Decentralized POMDPs." Conference on Uncertainty in Artificial Intelligence, 2007.](https://mlanthology.org/uai/2007/amato2007uai-optimizing/) doi:10.5555/3020488.3020489
BibTeX
@inproceedings{amato2007uai-optimizing,
title = {{Optimizing Memory-Bounded Controllers for Decentralized POMDPs}},
author = {Amato, Christopher and Bernstein, Daniel S. and Zilberstein, Shlomo},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2007},
pages = {1--8},
doi = {10.5555/3020488.3020489},
url = {https://mlanthology.org/uai/2007/amato2007uai-optimizing/}
}