Towards Convergence to Nash Equilibria in Two-Team Zero-Sum Games

Abstract

Contemporary applications of machine learning raise important and overlooked theoretical questions regarding optimization in two-team games. Formally, two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents, each experiencing a utility identical to that of their teammates and opposite to that of the opposing team. We focus on the solution concept of Nash equilibria and prove $\textrm{CLS}$-hardness of computing them in this class of games. To further examine the capabilities of online learning algorithms in games with full-information feedback, we propose a benchmark of a simple, yet nontrivial, family of such games. These games do not enjoy the properties used to prove convergence for relevant algorithms. In particular, we use a dynamical systems perspective to demonstrate that gradient descent-ascent, its optimistic variant, optimistic multiplicative weights update, and extra gradient fail to converge (even locally) to a Nash equilibrium. On a brighter note, we propose a first-order method that leverages control theory techniques and, under some conditions, enjoys last-iterate local convergence to a Nash equilibrium. We also believe our proposed method is of independent interest for general min-max optimization.
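For concreteness, here is a minimal formal sketch of the setup the abstract describes (notation ours, not taken from the paper): a two-team zero-sum game has teams $\mathcal{A} = \{1, \dots, n\}$ and $\mathcal{B} = \{1, \dots, m\}$, with each player $i \in \mathcal{A}$ choosing a mixed strategy $x_i$ over a finite action set and each $j \in \mathcal{B}$ choosing $y_j$. A single team payoff $f(x, y)$ is shared: every player in $\mathcal{A}$ receives utility $f$ and every player in $\mathcal{B}$ receives $-f$. A Nash equilibrium is then a profile $(x^\star, y^\star)$ from which no single player benefits by deviating unilaterally:

$$
f(x^\star, y^\star) \;\ge\; f(x_i, x^\star_{-i}, y^\star) \quad \forall i \in \mathcal{A},\ \forall x_i,
\qquad
f(x^\star, y^\star) \;\le\; f(x^\star, y_j, y^\star_{-j}) \quad \forall j \in \mathcal{B},\ \forall y_j.
$$

The online-learning baselines named in the abstract correspond, in their simplest form (gradient descent-ascent), to all players taking simultaneous projected gradient steps on their own utility,

$$
x_i^{t+1} = \Pi_{\mathcal{X}_i}\!\big(x_i^t + \eta \nabla_{x_i} f(x^t, y^t)\big),
\qquad
y_j^{t+1} = \Pi_{\mathcal{Y}_j}\!\big(y_j^t - \eta \nabla_{y_j} f(x^t, y^t)\big),
$$

where $\Pi$ denotes projection onto each player's simplex and $\eta$ is a step size; the paper's benchmark consists of games on which such dynamics (and their optimistic and extra-gradient variants) fail to converge, even locally.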

Cite

Text

Kalogiannis et al. "Towards Convergence to Nash Equilibria in Two-Team Zero-Sum Games." International Conference on Learning Representations, 2023.

Markdown

[Kalogiannis et al. "Towards Convergence to Nash Equilibria in Two-Team Zero-Sum Games." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/kalogiannis2023iclr-convergence/)

BibTeX

@inproceedings{kalogiannis2023iclr-convergence,
  title     = {{Towards Convergence to Nash Equilibria in Two-Team Zero-Sum Games}},
  author    = {Kalogiannis, Fivos and Panageas, Ioannis and Vlatakis-Gkaragkounis, Emmanouil-Vasileios},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/kalogiannis2023iclr-convergence/}
}