Expectation Maximization for Average Reward Decentralized POMDPs
Abstract
Planning for multiple agents under uncertainty is often based on decentralized partially observable Markov decision processes (Dec-POMDPs), but current methods must de-emphasize long-term effects of actions by a discount factor. In tasks like wireless networking, agents are evaluated by average performance over time, both short- and long-term effects of actions are crucial, and discounting-based solutions can perform poorly. We show that under a common set of conditions, expectation maximization (EM) for average-reward Dec-POMDPs gets stuck in a local optimum. We introduce a new average-reward EM method; it outperforms a state-of-the-art discounted-reward Dec-POMDP method in experiments.
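For context, the contrast the abstract draws is between the standard discounted and average-reward planning criteria. The following definitions are general background, not quoted from the paper:

$$J_\gamma^\pi = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^t\, r_t\right] \;(0 \le \gamma < 1) \qquad \text{vs.} \qquad \rho^\pi = \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\!\left[\sum_{t=0}^{T-1} r_t\right].$$

The discount factor $\gamma$ geometrically down-weights rewards far in the future, while the average-reward objective $\rho^\pi$ weights all time steps equally; this is why discounting-based methods can underperform on tasks judged by long-run average performance.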
Cite
Text
Pajarinen and Peltonen. "Expectation Maximization for Average Reward Decentralized POMDPs." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2013. doi:10.1007/978-3-642-40988-2_9

Markdown
[Pajarinen and Peltonen. "Expectation Maximization for Average Reward Decentralized POMDPs." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2013.](https://mlanthology.org/ecmlpkdd/2013/pajarinen2013ecmlpkdd-expectation/) doi:10.1007/978-3-642-40988-2_9

BibTeX
@inproceedings{pajarinen2013ecmlpkdd-expectation,
title = {{Expectation Maximization for Average Reward Decentralized POMDPs}},
author = {Pajarinen, Joni and Peltonen, Jaakko},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2013},
pages = {129--144},
doi = {10.1007/978-3-642-40988-2_9},
url = {https://mlanthology.org/ecmlpkdd/2013/pajarinen2013ecmlpkdd-expectation/}
}