Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity
Abstract
We consider the problem of learning sparse polymatrix games from observations of strategic interactions. We show that a polynomial time method based on $\ell_{1,2}$-group regularized logistic regression recovers a game whose Nash equilibria are $\epsilon$-Nash equilibria of the game from which the data was generated (the true game), using $O(m^4 d^4 \log (pd))$ samples of strategy profiles — where $m$ is the maximum number of pure strategies of a player, $p$ is the number of players, and $d$ is the maximum degree of the game graph. Under slightly more stringent separability conditions on the payoff matrices of the true game, we show that our method learns a game with exactly the same Nash equilibria as the true game. We also show that $\Omega(d \log (pm))$ samples are necessary for any method to consistently recover a game, with the same Nash equilibria as the true game, from observations of strategic interactions.
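The per-player estimator described in the abstract can be sketched concretely. Below is a minimal, illustrative implementation (not the authors' code) of that building block: a multinomial logistic regression of one player's strategy on the other players' strategies, with an $\ell_{1,2}$ group penalty over each opponent's payoff block, solved by proximal gradient descent. The function name fit_player, the step size, and the iteration count are assumptions made for this example.

```python
import numpy as np

def fit_player(i, S, m, lam, lr=0.1, iters=500):
    """Estimate player i's pairwise payoff blocks from observed strategy profiles.

    S   : (n, p) int array of joint strategies, entries in {0, ..., m-1}
    m   : number of pure strategies per player
    lam : l_{1,2} group-regularization strength
    Returns W of shape (p, m, m); W[j, b, a] is the estimated payoff to player i
    for playing a when player j plays b, with W[j] shrunk to zero for players j
    estimated to be non-neighbors of i in the game graph.
    """
    n, p = S.shape
    onehot = np.eye(m)[S]                        # (n, p, m) one-hot strategies
    W = np.zeros((p, m, m))
    y_onehot = np.eye(m)[S[:, i]]                # player i's observed strategies
    for _ in range(iters):
        # Conditional logits of player i: logits[k, a] = sum_j W[j, S[k, j], a]
        logits = np.einsum('npm,pma->na', onehot, W)
        prob = np.exp(logits - logits.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        # Gradient of the averaged multinomial logistic loss
        resid = prob - y_onehot
        grad = np.einsum('npm,na->pma', onehot, resid) / n
        grad[i] = 0.0                            # player i does not regress on itself
        W -= lr * grad
        # Proximal step: group soft-thresholding of each opponent block W[j]
        norms = np.sqrt((W ** 2).sum(axis=(1, 2), keepdims=True))
        W *= np.maximum(1.0 - lr * lam / np.maximum(norms, 1e-12), 0.0)
    return W
```

Running this estimator for every player and taking the players with nonzero blocks as neighbors yields an estimated game graph and payoffs; how to set lam, and the guarantees that follow, are what the paper's analysis addresses.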
Cite
Text
Ghoshal and Honorio. "Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity." International Conference on Artificial Intelligence and Statistics, 2018.
Markdown
[Ghoshal and Honorio. "Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/ghoshal2018aistats-learning-a/)
BibTeX
@inproceedings{ghoshal2018aistats-learning-a,
title = {{Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity}},
author = {Ghoshal, Asish and Honorio, Jean},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {1486--1494},
url = {https://mlanthology.org/aistats/2018/ghoshal2018aistats-learning-a/}
}