Exploiting Inter-Agent Coupling Information for Efficient Reinforcement Learning of Cooperative LQR

Abstract

Developing scalable and efficient reinforcement learning algorithms for cooperative multi-agent control has received significant attention in recent years. Existing literature has proposed inexact decompositions of local Q-functions based on empirical information structures between the agents. In this paper, we exploit inter-agent coupling information and propose a systematic approach to exactly decompose the local Q-function of each agent. We develop an approximate least-squares policy iteration algorithm based on the proposed decomposition and identify two architectures to learn the local Q-function for each agent. We establish that the worst-case sample complexity of the decomposition equals that of the centralized case and derive necessary and sufficient graphical conditions on the inter-agent couplings for achieving better sample efficiency. We demonstrate the improved sample and computational efficiency on numerical examples.
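To make the abstract's "least-squares policy iteration" concrete, here is a minimal single-agent sketch of the underlying Q-function machinery for LQR: fit a quadratic Q-function from transition data by least squares, then improve the gain greedily. The system matrices, noise scale, and sample counts below are illustrative assumptions, not the paper's multi-agent setup or its decomposition.

```python
import numpy as np

# Illustrative toy system (assumed, not from the paper):
# x' = A x + B u, stage cost x^T Qc x + u^T R u.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Qc, R = np.eye(2), np.array([[1.0]])
n, m = 2, 1

def features(z):
    # Quadratic features z_i z_j (i <= j), parameterizing a symmetric H in
    # Q(x, u) = [x; u]^T H [x; u].
    return np.outer(z, z)[np.triu_indices(len(z))]

def lspi(K, iters=6, n_samples=500, seed=0):
    # Least-squares policy iteration: evaluate the gain K by solving the
    # Bellman equation in least squares, then improve K from the fitted H.
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        Phi, costs = [], []
        for _ in range(n_samples):
            x = rng.normal(size=n)
            u = -K @ x + 0.5 * rng.normal(size=m)   # exploratory input
            xn = A @ x + B @ u
            z = np.concatenate([x, u])
            zn = np.concatenate([xn, -K @ xn])      # on-policy next action
            Phi.append(features(z) - features(zn))  # Bellman difference
            costs.append(x @ Qc @ x + u @ R @ u)
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(costs), rcond=None)
        H = np.zeros((n + m, n + m))
        H[np.triu_indices(n + m)] = theta
        H = (H + H.T) / 2                           # recover symmetric H
        K = np.linalg.solve(H[n:, n:], H[n:, :n])   # greedy improvement
    return K

K = lspi(np.array([[1.0, 1.0]]))  # start from a stabilizing gain
```

Because the toy dynamics are deterministic, the Bellman equation holds exactly at every sample, so the least-squares fit recovers the policy's Q-function and the iteration converges to the optimal gain; the paper's contribution concerns decomposing this Q-function across coupled agents, which this single-agent sketch does not show.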

Cite

Text

Syed and Bai. "Exploiting Inter-Agent Coupling Information for Efficient Reinforcement Learning of Cooperative LQR." Proceedings of the 7th Annual Learning for Dynamics & Control Conference, 2025.

Markdown

[Syed and Bai. "Exploiting Inter-Agent Coupling Information for Efficient Reinforcement Learning of Cooperative LQR." Proceedings of the 7th Annual Learning for Dynamics & Control Conference, 2025.](https://mlanthology.org/l4dc/2025/syed2025l4dc-exploiting/)

BibTeX

@inproceedings{syed2025l4dc-exploiting,
  title     = {{Exploiting Inter-Agent Coupling Information for Efficient Reinforcement Learning of Cooperative LQR}},
  author    = {Syed, Shahbaz P Qadri and Bai, He},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  year      = {2025},
  pages     = {1378--1391},
  volume    = {283},
  url       = {https://mlanthology.org/l4dc/2025/syed2025l4dc-exploiting/}
}