Leader-Based Decision Learning for Cooperative Multi-Agent Reinforcement Learning

Abstract

In social learning settings, for both humans and animals, a leader in the team enables efficient learning for the other novices. This paper constructs a leader-based decision learning framework for Multi-Agent Reinforcement Learning and investigates whether a leader likewise enables the learning of novice agents. We compare three approaches to distilling a leader's experiences: Linear Layer Dimension Reduction, Attentive Graph Pooling, and an Attention-based Graph Neural Network. We show that leader-based decision learning can 1) enable agents to learn faster, cooperate more effectively, and escape local optima, and 2) improve the generalizability of agents in more challenging, unseen environments. The key to effective distillation is to maintain and aggregate important information.
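The abstract names the three distillation schemes but not their exact architectures. As a rough illustration of the second, Attentive Graph Pooling, the following PyTorch sketch pools a set of experience embeddings into a single summary vector via learned attention weights, so salient entries in the leader's experience dominate the distilled summary. The class name, tensor shapes, and scoring rule here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class AttentiveGraphPooling(nn.Module):
    """Pool per-experience node embeddings into one vector with learned
    attention weights (a sketch; hyperparameters are assumptions)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Learnable query that scores each node embedding.
        self.query = nn.Parameter(torch.randn(embed_dim))
        self.key_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, embed_dim) embeddings of the leader's experiences.
        keys = self.key_proj(nodes)                        # (N, D)
        scores = keys @ self.query / keys.shape[-1] ** 0.5  # (N,) scaled scores
        weights = torch.softmax(scores, dim=0)             # attention over nodes
        return weights @ nodes                             # (D,) pooled summary

# Toy usage: distill 8 experience embeddings of width 32 into one vector.
pool = AttentiveGraphPooling(embed_dim=32)
summary = pool(torch.randn(8, 32))
print(summary.shape)  # torch.Size([32])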

Cite

Text

Chen et al. "Leader-Based Decision Learning for Cooperative Multi-Agent Reinforcement Learning." ICML 2022 Workshops: DARL, 2022.

Markdown

[Chen et al. "Leader-Based Decision Learning for Cooperative Multi-Agent Reinforcement Learning." ICML 2022 Workshops: DARL, 2022.](https://mlanthology.org/icmlw/2022/chen2022icmlw-leaderbased/)

BibTeX

@inproceedings{chen2022icmlw-leaderbased,
  title     = {{Leader-Based Decision Learning for Cooperative Multi-Agent Reinforcement Learning}},
  author    = {Chen, Wenqi and Zeng, Xin and Li, Amber},
  booktitle = {ICML 2022 Workshops: DARL},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/chen2022icmlw-leaderbased/}
}