Extra-Gradient with Player Sampling for Faster Convergence in N-Player Games
Abstract
Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g. when training GANs. In this paper, we analyse a new extra-gradient method for Nash equilibrium finding that performs gradient extrapolations and updates on a random subset of players at each iteration. This approach provably exhibits a better rate of convergence than full extra-gradient for non-smooth convex games with a noisy gradient oracle. We propose an additional variance reduction mechanism to obtain speed-ups in smooth convex games. Our approach makes extrapolation amenable to massive multiplayer settings and brings empirical speed-ups, in particular when using a heuristic cyclic sampling scheme. Most importantly, it allows training faster and better GANs and mixtures of GANs.
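The core step described in the abstract, extra-gradient where extrapolation and update are restricted to a sampled subset of players, can be sketched as follows. This is a minimal illustration on a simple smooth convex quadratic game, not the paper's implementation; the game, function names, and parameter values are our own assumptions.

```python
import numpy as np


def player_grad(x, i, c=0.25):
    # Gradient of player i's loss in a toy smooth convex game:
    # f_i(x) = 0.5 * x_i**2 + c * x_i * sum_{j != i} x_j,
    # whose unique Nash equilibrium is x = 0.
    return x[i] + c * (x.sum() - x[i])


def sampled_extragradient(n_players=10, n_iters=500, lr=0.1, batch=3, seed=0):
    # Hypothetical sketch of extra-gradient with player sampling:
    # at each iteration, only a random subset of players is
    # extrapolated and then updated, instead of all N players.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_players)
    for _ in range(n_iters):
        subset = rng.choice(n_players, size=batch, replace=False)
        # Extrapolation: look-ahead step for the sampled players only.
        x_look = x.copy()
        for i in subset:
            x_look[i] = x[i] - lr * player_grad(x, i)
        # Update: gradients are evaluated at the extrapolated point.
        for i in subset:
            x[i] = x[i] - lr * player_grad(x_look, i)
    return x
```

Under these assumptions the iterates approach the equilibrium at the origin, while each iteration costs only `batch` gradient evaluations per step instead of `n_players`.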
Cite
Text
Jelassi et al. "Extra-Gradient with Player Sampling for Faster Convergence in N-Player Games." International Conference on Machine Learning, 2020.

Markdown

[Jelassi et al. "Extra-Gradient with Player Sampling for Faster Convergence in N-Player Games." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/jelassi2020icml-extragradient/)

BibTeX
@inproceedings{jelassi2020icml-extragradient,
title = {{Extra-Gradient with Player Sampling for Faster Convergence in N-Player Games}},
author = {Jelassi, Samy and Domingo-Enrich, Carles and Scieur, Damien and Mensch, Arthur and Bruna, Joan},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {4736--4745},
volume = {119},
url = {https://mlanthology.org/icml/2020/jelassi2020icml-extragradient/}
}