Multi-Agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents
Abstract
We consider a scenario where multiple agents learn a common decision vector from data that can be influenced by the agents' decisions. This leads to the problem of multi-agent performative prediction (Multi-PfD). In this paper, we formulate Multi-PfD as a decentralized optimization problem that minimizes a sum of loss functions, where each loss function is based on a distribution influenced by the local decision vector. We first establish a necessary and sufficient condition for the Multi-PfD problem to admit a unique multi-agent performative stable (Multi-PS) solution. We show that enforcing consensus leads to a laxer condition, with respect to the distributions' sensitivities, for the existence of a Multi-PS solution than in the single-agent case. Then, we study a decentralized extension to the greedy deployment scheme [Mendler-Dünner et al., 2020], called the DSGD-GD scheme. We show that DSGD-GD converges to the Multi-PS solution and analyze its non-asymptotic convergence rate. Numerical results validate our analysis.
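For intuition, below is a minimal sketch of a DSGD-GD-style iteration under illustrative assumptions, not the paper's actual algorithmic details or experimental setup: each agent mixes its decision vector with its neighbors' via a doubly stochastic weight matrix (consensus step), then takes a stochastic gradient step on a sample drawn from a distribution shifted by its own greedily deployed decision. The quadratic loss, the linear shift in `sample_from`, and all parameter values are hypothetical placeholders.

```python
import numpy as np

# --- Hypothetical problem setup (illustrative only) ---
n_agents, dim = 5, 3
rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix for a ring graph (consensus weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

base = rng.normal(size=(n_agents, dim))   # agents' base data means (assumed)
EPS = 0.1                                 # sensitivity of the distribution shift (assumed)

def sample_from(i, theta):
    """Draw one sample whose distribution is shifted by agent i's deployed decision."""
    return base[i] + EPS * theta + 0.1 * rng.normal(size=dim)

def grad_loss(theta, z):
    """Gradient of an illustrative least-squares loss 0.5 * ||theta - z||^2."""
    return theta - z

# --- DSGD-GD-style loop: consensus step followed by a greedy-deployment SGD step ---
Theta = rng.normal(size=(n_agents, dim))  # local decision vectors
step = 0.05
for t in range(2000):
    mixed = W @ Theta                     # gossip averaging with neighbors
    for i in range(n_agents):
        z = sample_from(i, Theta[i])      # sample under the currently deployed decision
        Theta[i] = mixed[i] - step * grad_loss(mixed[i], z)

# Disagreement across agents shrinks as the iterates approach consensus.
print("disagreement:", np.linalg.norm(Theta - Theta.mean(axis=0)))
```

With a small enough sensitivity `EPS` and step size, the local iterates reach consensus and settle near a common fixed point, mirroring the convergence to a Multi-PS solution described in the abstract.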
Cite
Text
Li et al. "Multi-Agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents." Neural Information Processing Systems, 2022.
Markdown
[Li et al. "Multi-Agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-multiagent/)
BibTeX
@inproceedings{li2022neurips-multiagent,
title = {{Multi-Agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents}},
author = {Li, Qiang and Yau, Chung-Yiu and Wai, Hoi-To},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/li2022neurips-multiagent/}
}