Learning to Coordinate Efficiently: A Model-Based Approach
Abstract
In common-interest stochastic games all players receive an identical payoff. Players participating in such games must learn to coordinate with each other in order to receive the highest possible value. A number of reinforcement learning algorithms have been proposed for this problem, and some have been shown to converge to good solutions in the limit. In this paper we show that using very simple model-based algorithms, much better (i.e., polynomial) convergence rates can be attained. Moreover, our model-based algorithms are guaranteed to converge to the optimal value, unlike many of the existing algorithms.
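To make the idea concrete, here is a minimal illustrative sketch (not the paper's exact algorithm) of why model-based learning coordinates quickly in a common-interest game: because all players observe the same payoff, each can build an identical model by sweeping the joint actions in a fixed order, and a shared deterministic tie-breaking rule then lets them all select the same optimal joint action without communication. The payoff table below is hypothetical.

```python
import itertools

def learn_to_coordinate(payoff, n_actions):
    """Sketch of model-based coordination in a 2-player
    common-interest matrix game (assumed setting, not the
    paper's algorithm).  Exploration is a single deterministic
    sweep, so the sample cost is polynomial in the action space."""
    model = {}
    # Exploration phase: visit every joint action once and record
    # the common payoff both players observe.
    for joint in itertools.product(range(n_actions), repeat=2):
        model[joint] = payoff(joint)
    # Coordination phase: both players compute the same argmax with
    # lexicographic tie-breaking, so they pick the same joint action.
    best = max(sorted(model), key=lambda j: model[j])
    return best, model[best]

# Hypothetical 2x2 common-interest game: the best common payoff
# is obtained when both players play action 1.
table = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}
best_joint, value = learn_to_coordinate(lambda j: table[j], 2)
```

The shared tie-breaking rule is what removes the coordination ambiguity: any deterministic ordering works, as long as every player uses the same one.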
Cite
Text
Brafman and Tennenholtz. "Learning to Coordinate Efficiently: A Model-Based Approach." Journal of Artificial Intelligence Research, 2003. doi:10.1613/JAIR.1154
Markdown
[Brafman and Tennenholtz. "Learning to Coordinate Efficiently: A Model-Based Approach." Journal of Artificial Intelligence Research, 2003.](https://mlanthology.org/jair/2003/brafman2003jair-learning/) doi:10.1613/JAIR.1154
BibTeX
@article{brafman2003jair-learning,
title = {{Learning to Coordinate Efficiently: A Model-Based Approach}},
author = {Brafman, Ronen I. and Tennenholtz, Moshe},
journal = {Journal of Artificial Intelligence Research},
year = {2003},
pages = {11--23},
doi = {10.1613/JAIR.1154},
volume = {19},
url = {https://mlanthology.org/jair/2003/brafman2003jair-learning/}
}