Fast Computation of Nash Equilibria in Imperfect Information Games
Abstract
We introduce and analyze a class of algorithms, called Mirror Ascent against an Improved Opponent (MAIO), for computing Nash equilibria in two-player zero-sum games, both in normal form and in sequential form with imperfect information. These algorithms update the policy of each player with a mirror-ascent step to maximize the value of playing against an improved opponent. An improved opponent can be a best response, a greedy policy, a policy improved by policy gradient, or a policy improved by any other reinforcement learning or search technique. We establish convergence of the last iterate to the set of Nash equilibria and show that the speed of convergence depends on the amount of improvement offered by these improved policies. In addition, we show that under certain conditions, using a best response as the improved policy yields an exponential convergence rate.
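The update described in the abstract can be sketched for a normal-form zero-sum matrix game, taking a best response as the improved opponent and multiplicative weights (the entropy mirror map) as the mirror-ascent step. This is an illustrative reconstruction from the abstract alone, not the paper's exact algorithm; the function names `maio_matrix` and `exploitability`, the step size `eta`, and the iteration count are our choices.

```python
import numpy as np

def maio_matrix(A, eta=0.1, iters=2000):
    """Sketch of Mirror Ascent against an Improved Opponent (MAIO) on a
    matrix game with payoff A for the row (maximizing) player."""
    n, m = A.shape
    x = np.ones(n) / n  # row player's mixed strategy
    y = np.ones(m) / m  # column player's mixed strategy
    for _ in range(iters):
        # Improved opponents: here, exact best responses to the current policies.
        y_br = np.eye(m)[np.argmin(x @ A)]  # column player's best response to x
        x_br = np.eye(n)[np.argmax(A @ y)]  # row player's best response to y
        # Mirror-ascent step with the entropy mirror map (multiplicative weights),
        # moving each player toward higher value against the improved opponent.
        x = x * np.exp(eta * (A @ y_br))
        x = x / x.sum()
        y = y * np.exp(-eta * (x_br @ A))
        y = y / y.sum()
    return x, y

def exploitability(A, x, y):
    # Distance to equilibrium: sum of both players' best-response gains.
    return float(np.max(A @ y) - np.min(x @ A))
```

On a game with a pure saddle point, e.g. `A = [[1, 2], [0, 1]]` (equilibrium at row 0, column 0, value 1), the last iterate of this sketch converges to the equilibrium and the exploitability goes to zero; for games with mixed equilibria, the paper's analysis (not reproduced here) governs the convergence behavior.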
Cite
Text
Munos et al. "Fast Computation of Nash Equilibria in Imperfect Information Games." International Conference on Machine Learning, 2020.
Markdown
[Munos et al. "Fast Computation of Nash Equilibria in Imperfect Information Games." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/munos2020icml-fast/)
BibTeX
@inproceedings{munos2020icml-fast,
title = {{Fast Computation of Nash Equilibria in Imperfect Information Games}},
author = {Munos, Remi and Perolat, Julien and Lespiau, Jean-Baptiste and Rowland, Mark and De Vylder, Bart and Lanctot, Marc and Timbers, Finbarr and Hennes, Daniel and Omidshafiei, Shayegan and Gruslys, Audrunas and Azar, Mohammad Gheshlaghi and Lockhart, Edward and Tuyls, Karl},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7119--7129},
volume = {119},
url = {https://mlanthology.org/icml/2020/munos2020icml-fast/}
}