Deconfounded Value Decomposition for Multi-Agent Reinforcement Learning
Abstract
Value decomposition (VD) methods are widely used in cooperative multi-agent reinforcement learning (MARL), where credit assignment plays an important role in guiding the agents' decentralized execution. In this paper, we investigate VD from the novel perspective of causal inference. We first show that, in existing VD methods, the environment acts as an unobserved confounder: a common cause of both the global state and the joint value function, which introduces confounding bias into the learned credit assignment. We then present our approach, deconfounded value decomposition (DVD), which cuts off the backdoor confounding path from the global state to the joint value function. The cut is implemented by introducing the trajectory graph, which depends only on the agents' local trajectories, as a proxy confounder. DVD is general enough to be applied to various VD methods, and extensive experiments show that it consistently achieves significant performance gains over different state-of-the-art VD methods on the StarCraft II and MACO benchmarks.
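The backdoor argument in the abstract follows the standard backdoor adjustment from causal inference. The sketch below is illustrative only and uses assumed notation rather than the paper's exact equations: s denotes the global state, Q_tot the joint value function, and g_tau the trajectory-graph proxy confounder. Conditioning on the proxy and averaging over its distribution blocks the backdoor path from s to Q_tot:

P(Q_tot | do(s)) = \sum_{g_\tau} P(Q_tot | s, g_\tau) P(g_\tau)

In words, the interventional effect of the global state on the joint value is estimated by conditioning on the trajectory graph and marginalizing it out, rather than conditioning on the state alone.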
Cite
Text
Li et al. "Deconfounded Value Decomposition for Multi-Agent Reinforcement Learning." International Conference on Machine Learning, 2022.Markdown
[Li et al. "Deconfounded Value Decomposition for Multi-Agent Reinforcement Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/li2022icml-deconfounded/)BibTeX
@inproceedings{li2022icml-deconfounded,
title = {{Deconfounded Value Decomposition for Multi-Agent Reinforcement Learning}},
author = {Li, Jiahui and Kuang, Kun and Wang, Baoxiang and Liu, Furui and Chen, Long and Fan, Changjie and Wu, Fei and Xiao, Jun},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {12843--12856},
volume = {162},
url = {https://mlanthology.org/icml/2022/li2022icml-deconfounded/}
}