Symmetric Mean-Field Langevin Dynamics for Distributional Minimax Problems
Abstract
In this paper, we extend mean-field Langevin dynamics, for the first time, to minimax optimization over probability distributions with symmetric and provably convergent updates. We propose \emph{mean-field Langevin averaged gradient} (MFL-AG), a single-loop algorithm that implements gradient descent-ascent in the distribution spaces with a novel weighted averaging, and establish average-iterate convergence to the mixed Nash equilibrium. We also study both time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result which accounts for the dependency of the particle interactions on all previous distributions. Furthermore, we propose \emph{mean-field Langevin anchored best response} (MFL-ABR), a symmetric double-loop algorithm based on best response dynamics with linear last-iterate convergence. Finally, we study applications to zero-sum Markov games and conduct simulations demonstrating long-term optimality.
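To make the setting concrete, here is a minimal, illustrative sketch of particle-discretized mean-field Langevin gradient descent-ascent on a toy strongly convex-concave objective f(x, y) = xy + x²/2 − y²/2, whose unique equilibrium has both means at zero. This is not the paper's MFL-AG algorithm: the exponential averaging of the opponent's mean below is only a simple stand-in for the paper's weighted averaging over all previous distributions, and all parameter values are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, lr, temp = 2000, 3000, 0.05, 0.1  # particles, iterations, step size, entropic temperature
x = rng.normal(1.0, 1.0, n)   # min-player particles (empirical approximation of mu)
y = rng.normal(-1.0, 1.0, n)  # max-player particles (empirical approximation of nu)
xbar_avg, ybar_avg = x.mean(), y.mean()
beta = 0.9  # exponential-averaging weight; a crude stand-in for MFL-AG's history weighting

for _ in range(steps):
    # each player responds to a running average of the opponent's mean, not the raw iterate
    xbar_avg = beta * xbar_avg + (1 - beta) * x.mean()
    ybar_avg = beta * ybar_avg + (1 - beta) * y.mean()
    gx = ybar_avg + x   # d/dx of x*ybar + x^2/2
    gy = xbar_avg - y   # d/dy of xbar*y - y^2/2
    # symmetric Langevin updates: gradient step plus Gaussian noise scaled by the temperature
    x = x - lr * gx + np.sqrt(2 * lr * temp) * rng.normal(size=n)  # descent for the min player
    y = y + lr * gy + np.sqrt(2 * lr * temp) * rng.normal(size=n)  # ascent for the max player

print(x.mean(), y.mean())  # both means drift toward 0, the equilibrium of the toy problem
```

The Langevin noise keeps each empirical measure spread out (the entropic regularization), while the symmetric descent/ascent drives the two particle clouds toward the mixed equilibrium.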
Cite
Text
Kim et al. "Symmetric Mean-Field Langevin Dynamics for Distributional Minimax Problems." NeurIPS 2023 Workshops: M3L, 2023.

Markdown

[Kim et al. "Symmetric Mean-Field Langevin Dynamics for Distributional Minimax Problems." NeurIPS 2023 Workshops: M3L, 2023.](https://mlanthology.org/neuripsw/2023/kim2023neuripsw-symmetric/)

BibTeX
@inproceedings{kim2023neuripsw-symmetric,
title = {{Symmetric Mean-Field Langevin Dynamics for Distributional Minimax Problems}},
author = {Kim, Juno and Yamamoto, Kakei and Oko, Kazusato and Yang, Zhuoran and Suzuki, Taiji},
booktitle = {NeurIPS 2023 Workshops: M3L},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/kim2023neuripsw-symmetric/}
}