Correlated Q-Learning
Abstract
This paper introduces Correlated-Q (CE-Q) learning, a multiagent Q-learning algorithm based on the correlated equilibrium (CE) solution concept. CE-Q generalizes both Nash-Q and Friend-and-Foe-Q: in general-sum games, the set of correlated equilibria contains the set of Nash equilibria; in constant-sum games, the set of correlated equilibria contains the set of minimax equilibria. This paper describes experiments with four variants of CE-Q, demonstrating empirical convergence to equilibrium policies on a testbed of general-sum Markov games.

ICML: Proceedings of the Twentieth International Conference on Machine Learning
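The stage-game subroutine behind the "utilitarian" CE-Q variant can be illustrated with a short linear program. The sketch below (not the paper's code; it assumes NumPy and SciPy) computes the joint-action distribution maximizing the players' total payoff over all correlated equilibria of a one-shot bimatrix game, here the classic game of Chicken:

```python
# A minimal sketch of a utilitarian correlated-equilibrium solver for a
# two-player stage game, via linear programming. Assumes numpy + scipy.
import numpy as np
from scipy.optimize import linprog

def utilitarian_ce(u1, u2):
    """Joint-action distribution maximizing u1 + u2 over all
    correlated equilibria of the bimatrix game (u1, u2)."""
    n1, n2 = u1.shape
    nvar = n1 * n2                       # one variable per joint action
    idx = lambda a1, a2: a1 * n2 + a2

    A_ub, b_ub = [], []
    # Player 1 incentive constraints: obeying recommendation a beats
    # any deviation a':  sum_b p(a,b) * (u1[a,b] - u1[a',b]) >= 0.
    for a in range(n1):
        for a_dev in range(n1):
            if a_dev == a:
                continue
            row = np.zeros(nvar)
            for b in range(n2):
                row[idx(a, b)] = -(u1[a, b] - u1[a_dev, b])
            A_ub.append(row); b_ub.append(0.0)
    # Player 2 incentive constraints, symmetrically over columns.
    for b in range(n2):
        for b_dev in range(n2):
            if b_dev == b:
                continue
            row = np.zeros(nvar)
            for a in range(n1):
                row[idx(a, b)] = -(u2[a, b] - u2[a, b_dev])
            A_ub.append(row); b_ub.append(0.0)

    # Probabilities sum to one; linprog minimizes, so negate welfare.
    res = linprog(c=-(u1 + u2).ravel(),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, nvar)), b_eq=[1.0],
                  bounds=[(0, None)] * nvar)
    return res.x.reshape(n1, n2)

# Chicken: action 0 = dare, action 1 = yield.
u1 = np.array([[0.0, 7.0], [2.0, 6.0]])
u2 = u1.T
p = utilitarian_ce(u1, u2)
welfare = float(np.sum(p * (u1 + u2)))   # max CE welfare: 10.5
```

In CE-Q proper, this solver would be applied to the Q-value matrices at each state, with the resulting equilibrium value backed up through the Bellman update.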
Cite

Text
Greenwald and Hall. "Correlated Q-Learning." International Conference on Machine Learning, 2003.

Markdown
[Greenwald and Hall. "Correlated Q-Learning." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/greenwald2003icml-correlated/)

BibTeX
@inproceedings{greenwald2003icml-correlated,
title = {{Correlated Q-Learning}},
author = {Greenwald, Amy and Hall, Keith},
booktitle = {International Conference on Machine Learning},
year = {2003},
pages = {242--249},
url = {https://mlanthology.org/icml/2003/greenwald2003icml-correlated/}
}