MFVFD: A Multi-Agent Q-Learning Approach to Cooperative and Non-Cooperative Tasks
Abstract
Value function decomposition (VFD) methods under the popular paradigm of centralized training and decentralized execution (CTDE) have driven progress in multi-agent reinforcement learning. However, existing VFD methods decompose a group-level value function and therefore address only cooperative tasks. By instead decomposing individual value functions, we propose MFVFD, a novel multi-agent Q-learning approach based on mean-field theory for solving both cooperative and non-cooperative tasks. Our analysis of the Hawk-Dove and Nonmonotonic Cooperation matrix games evaluates MFVFD's convergent solution. Empirical studies on challenging mixed cooperative-competitive tasks in which hundreds of agents coexist demonstrate that MFVFD significantly outperforms existing baselines.
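For context, a minimal sketch of the two standard ingredients the abstract refers to (not the paper's exact formulation, which is given therein): group-level VFD methods such as VDN factor a single team value function additively,

$$Q_{\text{tot}}(s, \mathbf{a}) = \sum_{i=1}^{n} Q_i\!\left(s, a^i\right),$$

which presumes a shared objective across agents, while mean-field multi-agent Q-learning (Yang et al., 2018) makes large populations tractable by approximating each agent's action-value with its own action paired against the mean action $\bar{a}^j$ of its neighbors,

$$Q^j(s, \mathbf{a}) \approx Q^j\!\left(s, a^j, \bar{a}^j\right).$$

MFVFD's departure, per the abstract, is to apply the decomposition to each agent's individual value function under this mean-field view, so agents with distinct and possibly conflicting objectives can still be trained within CTDE.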
Cite
Text
Zhang et al. "MFVFD: A Multi-Agent Q-Learning Approach to Cooperative and Non-Cooperative Tasks." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/70
Markdown
[Zhang et al. "MFVFD: A Multi-Agent Q-Learning Approach to Cooperative and Non-Cooperative Tasks." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/zhang2021ijcai-mfvfd/) doi:10.24963/IJCAI.2021/70
BibTeX
@inproceedings{zhang2021ijcai-mfvfd,
title = {{MFVFD: A Multi-Agent Q-Learning Approach to Cooperative and Non-Cooperative Tasks}},
author = {Zhang, Tianhao and Ye, Qiwei and Bian, Jiang and Xie, Guangming and Liu, Tie-Yan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {500--506},
doi = {10.24963/IJCAI.2021/70},
url = {https://mlanthology.org/ijcai/2021/zhang2021ijcai-mfvfd/}
}