NfgTransformer: Equivariant Representation Learning for Normal-Form Games
Abstract
Normal-form games (NFGs) are the fundamental model of *strategic interaction*. We study their representation using neural networks. We describe the inherent equivariance of NFGs --- any permutation of strategies describes an equivalent game --- as well as the challenges this poses for representation learning. We then propose the NfgTransformer architecture that leverages this equivariance, leading to state-of-the-art performance in a range of game-theoretic tasks including equilibrium-solving, deviation gain estimation and ranking, with a common approach to NFG representation. We show that the resulting model is interpretable and versatile, paving the way towards deep learning systems capable of game-theoretic reasoning when interacting with humans and with each other.
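The equivariance mentioned in the abstract can be illustrated with a minimal sketch (not from the paper; variable names are hypothetical): relabeling each player's strategies permutes the rows and columns of the payoff matrix, and every strategy profile in the original game maps to a profile in the permuted game with identical payoffs.

```python
import numpy as np

# Illustrative example of the permutation equivariance described above.
rng = np.random.default_rng(0)
n_row, n_col = 3, 4

# Row player's payoff matrix in a 2-player normal-form game.
payoff = rng.normal(size=(n_row, n_col))

# Arbitrary relabelings of each player's strategies.
perm_r = rng.permutation(n_row)
perm_c = rng.permutation(n_col)

# The relabeled game: entry (perm_r[i], perm_c[j]) holds old entry (i, j).
permuted = np.empty_like(payoff)
permuted[np.ix_(perm_r, perm_c)] = payoff

# Every profile (i, j) in the original game corresponds to
# (perm_r[i], perm_c[j]) in the permuted game, with identical payoff,
# so the two matrices describe an equivalent game.
for i in range(n_row):
    for j in range(n_col):
        assert permuted[perm_r[i], perm_c[j]] == payoff[i, j]
```

A network that respects this symmetry must produce outputs (e.g. equilibrium strategies or deviation gains) that permute consistently with any such relabeling of the input game.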
Cite
Text
Liu et al. "NfgTransformer: Equivariant Representation Learning for Normal-Form Games." International Conference on Learning Representations, 2024.

Markdown
[Liu et al. "NfgTransformer: Equivariant Representation Learning for Normal-Form Games." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/liu2024iclr-nfgtransformer/)

BibTeX
@inproceedings{liu2024iclr-nfgtransformer,
  title = {{NfgTransformer: Equivariant Representation Learning for Normal-Form Games}},
  author = {Liu, Siqi and Marris, Luke and Piliouras, Georgios and Gemp, Ian and Heess, Nicolas},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/liu2024iclr-nfgtransformer/}
}