Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games

Abstract

We introduce a class of networked Markov potential games where agents are associated with nodes in a network. Each agent has its own local potential function, and the reward of each agent depends only on the states and actions of agents within a neighborhood. In this context, we propose a localized actor-critic algorithm. The algorithm is scalable since each agent uses only local information and does not need access to the global state. Further, the algorithm overcomes the curse of dimensionality through the use of function approximation. Our main results provide finite-sample guarantees up to a localization error and a function approximation error. Specifically, we achieve an $\tilde{\mathcal{O}}(\tilde{\epsilon}^{-4})$ sample complexity measured by the averaged Nash regret. This is the first finite-sample bound for multi-agent competitive games that does not depend on the number of agents.
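For intuition, below is a minimal, self-contained sketch of what a localized actor-critic loop can look like: each agent keeps a local critic over the state-action profile of its neighborhood and a softmax actor over its own action, so no agent ever touches the global state. Everything in the sketch (the ring topology, binary states and actions, the reward, the dynamics, and the step sizes) is an illustrative assumption, not the construction from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: N agents on a ring, binary local states/actions.
N = 6            # number of agents
KAPPA = 1        # neighborhood radius (kappa hops on the ring)
ALPHA_C = 0.1    # critic step size
ALPHA_A = 0.01   # actor step size
GAMMA = 0.9      # discount factor

def neighborhood(i):
    """Indices of agents within KAPPA hops of agent i on the ring."""
    return [(i + d) % N for d in range(-KAPPA, KAPPA + 1)]

def local_reward(i, s, a):
    """Placeholder local reward (not the paper's): agent i is rewarded
    for matching states to actions within its neighborhood."""
    return float(np.mean([s[j] == a[j] for j in neighborhood(i)]))

def encode(i, s, a):
    """Flatten the neighborhood state-action profile of agent i
    into a table index."""
    nbrs = neighborhood(i)
    bits = [s[j] for j in nbrs] + [a[j] for j in nbrs]
    return int("".join(str(int(b)) for b in bits), 2)

k = 2 * KAPPA + 1                   # neighborhood size
Q = np.zeros((N, 2 ** (2 * k)))     # local critics: Q_i(s_N(i), a_N(i))
theta = np.zeros((N, 2, 2))         # local actors: logits[i, s_i, a_i]

def policy(i, s):
    """Softmax policy of agent i over its own action, given only s_i."""
    logits = theta[i, s[i]]
    p = np.exp(logits - logits.max())
    return p / p.sum()

s = rng.integers(0, 2, size=N)
for t in range(5000):
    # Each agent acts using only local information (scalability).
    probs = [policy(i, s) for i in range(N)]
    a = np.array([rng.choice(2, p=probs[i]) for i in range(N)])
    # Toy local dynamics: next local state copies a random neighbor's action.
    s_next = np.array([a[rng.choice(neighborhood(i))] for i in range(N)])
    a_next = np.array([rng.choice(2, p=policy(i, s_next)) for i in range(N)])
    for i in range(N):
        r = local_reward(i, s, a)
        # Localized TD(0) critic update over the kappa-hop neighborhood.
        sa, sa_next = encode(i, s, a), encode(i, s_next, a_next)
        td = r + GAMMA * Q[i, sa_next] - Q[i, sa]
        Q[i, sa] += ALPHA_C * td
        # Actor: local policy-gradient step using the local critic value.
        grad_log = -probs[i]
        grad_log[a[i]] += 1.0
        theta[i, s[i]] += ALPHA_A * Q[i, sa] * grad_log
    s = s_next

In the paper's setting, the tabular critic would be replaced by function approximation to overcome the curse of dimensionality; the sketch uses a lookup table only to keep the example short.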

Cite

Text

Zhou et al. "Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games." Uncertainty in Artificial Intelligence, 2023.

Markdown

[Zhou et al. "Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games." Uncertainty in Artificial Intelligence, 2023.](https://mlanthology.org/uai/2023/zhou2023uai-convergence/)

BibTeX

@inproceedings{zhou2023uai-convergence,
  title     = {{Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games}},
  author    = {Zhou, Zhaoyi and Chen, Zaiwei and Lin, Yiheng and Wierman, Adam},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2023},
  pages     = {2563--2573},
  volume    = {216},
  publisher = {PMLR},
  url       = {https://mlanthology.org/uai/2023/zhou2023uai-convergence/}
}