Multi-Agent Training Beyond Zero-Sum with Correlated Equilibrium Meta-Solvers
Abstract
Two-player, constant-sum games are well studied in the literature, but there has been limited progress outside of this setting. We propose Joint Policy-Space Response Oracles (JPSRO), an algorithm for training agents in n-player, general-sum extensive-form games, which provably converges to an equilibrium. We further suggest correlated equilibria (CE) as promising meta-solvers, and propose a novel solution concept, Maximum Gini Correlated Equilibrium (MGCE), a principled and computationally efficient family of solutions for solving the correlated equilibrium selection problem. We conduct several experiments using CE meta-solvers for JPSRO and demonstrate convergence on n-player, general-sum games.
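The MGCE meta-solver reduces to a convex quadratic program: maximize the Gini impurity 1 - ||σ||² of the joint distribution σ over the polytope of correlated equilibria (linear incentive constraints plus the simplex constraint). Below is a minimal sketch of this QP, not the authors' implementation; the use of scipy's SLSQP solver and the game of Chicken as the payoff example are assumptions chosen purely for illustration.

```python
# Minimal sketch of a Maximum Gini Correlated Equilibrium (MGCE) solver for a
# two-player normal-form game: maximize Gini impurity 1 - ||sigma||^2 over the
# CE polytope, i.e. minimize sum(sigma^2) subject to linear CE constraints.
# Payoffs (the game of Chicken) and the SLSQP solver are illustrative choices.
import numpy as np
from scipy.optimize import minimize

# Payoff tensors U[p][a0, a1] for players p = 0 (row) and p = 1 (column).
U = [np.array([[0.0, 7.0], [2.0, 6.0]]),   # row player
     np.array([[0.0, 2.0], [7.0, 6.0]])]   # column player

n0, n1 = U[0].shape
num_joint = n0 * n1  # sigma is a distribution over joint actions

def ce_constraints():
    """Build the linear CE incentive constraints A @ sigma <= 0, one row per
    (player, recommended action, deviation action) triple: no player can gain
    in expectation by deviating from a recommended action."""
    rows = []
    for a in range(n0):              # row player: recommended a, deviates to b
        for b in range(n0):
            if a == b:
                continue
            row = np.zeros((n0, n1))
            row[a, :] = U[0][b, :] - U[0][a, :]   # expected gain from deviating
            rows.append(row.ravel())
    for a in range(n1):              # column player: recommended a, deviates to b
        for b in range(n1):
            if a == b:
                continue
            row = np.zeros((n0, n1))
            row[:, a] = U[1][:, b] - U[1][:, a]
            rows.append(row.ravel())
    return np.array(rows)

A = ce_constraints()
res = minimize(
    fun=lambda s: np.sum(s ** 2),                # min ||sigma||^2 == max Gini
    x0=np.full(num_joint, 1.0 / num_joint),      # start from uniform
    bounds=[(0.0, 1.0)] * num_joint,
    constraints=[
        {"type": "eq", "fun": lambda s: np.sum(s) - 1.0},   # simplex
        {"type": "ineq", "fun": lambda s: -A @ s},          # CE: A @ sigma <= 0
    ],
    method="SLSQP",
)
print(res.x.reshape(n0, n1))  # MGCE joint distribution over action pairs
```

In the JPSRO loop described in the paper, a meta-solver of this kind would be invoked at each iteration on the empirical payoff tensor of the current joint policy population, with each player's best response to the resulting joint distribution then added to the population.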
Cite
Text
Marris et al. "Multi-Agent Training Beyond Zero-Sum with Correlated Equilibrium Meta-Solvers." International Conference on Machine Learning, 2021.
Markdown
[Marris et al. "Multi-Agent Training Beyond Zero-Sum with Correlated Equilibrium Meta-Solvers." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/marris2021icml-multiagent/)
BibTeX
@inproceedings{marris2021icml-multiagent,
title = {{Multi-Agent Training Beyond Zero-Sum with Correlated Equilibrium Meta-Solvers}},
author = {Marris, Luke and Muller, Paul and Lanctot, Marc and Tuyls, Karl and Graepel, Thore},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {7480--7491},
volume = {139},
url = {https://mlanthology.org/icml/2021/marris2021icml-multiagent/}
}