Reinforcement Learning for Mean Field Games with Strategic Complementarities

Abstract

Mean Field Games (MFG) are the class of games with a very large number of agents, for which the standard equilibrium concept is a Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement of the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We also introduce a model-free and a model-based approach to learning T-MFE and provide sample complexity guarantees for both algorithms. We further develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.
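To make the fixed-point structure behind computing a T-MFE concrete, below is a minimal Python/NumPy sketch, not the authors' exact algorithm: the inner loop computes a Boltzmann (softmax) best response to the current mean field, modeling the trembling-hand randomization, and the outer loop pushes the mean field forward under the induced policy until both converge. The model interface (hypothetical functions transition(mu) and reward(mu)), the temperature tau, and all other names are illustrative assumptions.

import numpy as np

def softmax(q, tau):
    # Boltzmann ("trembling hand") policy: numerically stable softmax over actions.
    z = (q - q.max(axis=-1, keepdims=True)) / tau
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def t_mfe_fixed_point(transition, reward, n_states, n_actions,
                      gamma=0.95, tau=0.1, n_outer=200, n_inner=500, tol=1e-6):
    # transition(mu) -> P with shape (S, A, S'); reward(mu) -> R with shape (S, A).
    mu = np.full(n_states, 1.0 / n_states)          # uniform initial mean field
    pi = np.full((n_states, n_actions), 1.0 / n_actions)
    for _ in range(n_outer):
        P, R = transition(mu), reward(mu)           # model frozen at current mean field
        # Inner loop: soft value iteration, i.e. best response under softmax randomization.
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_inner):
            pi = softmax(Q, tau)
            V = (pi * Q).sum(axis=1)                # value of the softmax policy
            Q_new = R + gamma * (P @ V)             # (S, A, S') @ (S',) -> (S, A)
            if np.abs(Q_new - Q).max() < tol:
                Q = Q_new
                break
            Q = Q_new
        pi = softmax(Q, tau)
        # Outer update: push the mean field forward under the induced policy.
        P_pi = np.einsum('sa,sap->sp', pi, P)       # state kernel under pi
        mu_new = mu @ P_pi
        if np.abs(mu_new - mu).max() < tol:
            return mu_new, pi
        mu = mu_new
    return mu, pi

# Toy usage with a stylized complementarity: the reward for action 1 grows
# with the mean-field mass in state 1 (all numbers here are illustrative).
S, A = 2, 2
P0 = np.random.default_rng(0).dirichlet(np.ones(S), size=(S, A))
mu_star, pi_star = t_mfe_fixed_point(
    transition=lambda mu: P0,
    reward=lambda mu: np.array([[0.0, 1.0], [0.0, 1.0]]) * (0.5 + mu[1]),
    n_states=S, n_actions=A)

Under strategic complementarities, monotone iterations of this form are the natural candidates for converging to an extremal fixed point; see the paper for the precise update rules, learning variants, and conditions under which convergence is guaranteed.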

Cite

Text

Lee et al. "Reinforcement Learning for Mean Field Games with Strategic Complementarities." Artificial Intelligence and Statistics, 2021.

Markdown

[Lee et al. "Reinforcement Learning for Mean Field Games with Strategic Complementarities." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/lee2021aistats-reinforcement/)

BibTeX

@inproceedings{lee2021aistats-reinforcement,
  title     = {{Reinforcement Learning for Mean Field Games with Strategic Complementarities}},
  author    = {Lee, Kiyeob and Rengarajan, Desik and Kalathil, Dileep and Shakkottai, Srinivas},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {2458--2466},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/lee2021aistats-reinforcement/}
}