Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters

Abstract

Consider a general-sum $N$-player linear-quadratic (LQ) game with stochastic dynamics over a finite time horizon. It is known that, under mild assumptions, the Nash equilibrium (NE) strategies of the players can be obtained by a natural policy gradient algorithm. However, the traditional implementation of the algorithm requires complete state and action information from all agents and may not scale well with the number of agents. Under the assumption of known problem parameters, we present an algorithm that requires state and action information only from neighboring agents, as specified by the graph describing the dynamic or cost coupling among the agents. We show that the proposed algorithm converges to an $\epsilon$-neighborhood of the NE, where the value of $\epsilon$ depends on the size of the local neighborhood of agents.
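
To make the update form concrete, the following is a minimal illustrative sketch of a natural policy gradient step for a single-player, finite-horizon LQR problem; it is not the paper's multi-agent, neighborhood-restricted algorithm, and all symbols and parameter values (`A`, `B`, `Q`, `R`, `Qf`, `eta`, the horizon) are hypothetical examples chosen for the sketch.

```python
# Illustrative sketch only (not the paper's algorithm): one natural policy
# gradient step on time-varying feedback gains for finite-horizon LQR,
# showing the general shape of the per-player updates mentioned above.
import numpy as np

def closed_loop_cost_matrices(A, B, Q, R, Qf, K):
    """Backward recursion for P_t under the policy u_t = -K[t] x_t,
    so that x^T P_t x is the cost-to-go from state x at time t."""
    T = len(K)
    P = [None] * (T + 1)
    P[T] = Qf
    for t in reversed(range(T)):
        Acl = A - B @ K[t]
        P[t] = Q + K[t].T @ R @ K[t] + Acl.T @ P[t + 1] @ Acl
    return P

def natural_pg_step(A, B, Q, R, Qf, K, eta):
    """One natural policy gradient step on the gains K.

    The policy gradient w.r.t. K_t is 2 E_t Sigma_t with
    E_t = (R + B^T P_{t+1} B) K_t - B^T P_{t+1} A; preconditioning by
    Sigma_t^{-1} (the natural gradient) leaves K_t <- K_t - 2 eta E_t.
    """
    P = closed_loop_cost_matrices(A, B, Q, R, Qf, K)
    K_new = []
    for t in range(len(K)):
        E_t = (R + B.T @ P[t + 1] @ B) @ K[t] - B.T @ P[t + 1] @ A
        K_new.append(K[t] - 2 * eta * E_t)
    return K_new

# Hypothetical usage: scalar-state example with horizon 5.
if __name__ == "__main__":
    A = np.array([[1.0]]); B = np.array([[1.0]])
    Q = np.array([[1.0]]); R = np.array([[1.0]]); Qf = Q
    K = [np.zeros((1, 1)) for _ in range(5)]
    for _ in range(200):
        K = natural_pg_step(A, B, Q, R, Qf, K, eta=0.05)
    print([k.item() for k in K])  # gains approach the Riccati-optimal values
```

In the multi-agent setting studied in the paper, each player performs an analogous update on its own gain while the scalable variant restricts the information entering that update to the player's graph neighborhood.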

Cite

Text

Shibl et al. "Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters." Proceedings of the 7th Annual Learning for Dynamics \& Control Conference, 2025.

Markdown

[Shibl et al. "Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters." Proceedings of the 7th Annual Learning for Dynamics \& Control Conference, 2025.](https://mlanthology.org/l4dc/2025/shibl2025l4dc-scalable/)

BibTeX

@inproceedings{shibl2025l4dc-scalable,
  title     = {{Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters}},
  author    = {Shibl, Mostafa and Suttle, Wesley and Gupta, Vijay},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  year      = {2025},
  pages     = {139--152},
  volume    = {283},
  url       = {https://mlanthology.org/l4dc/2025/shibl2025l4dc-scalable/}
}