Summarizing Societies: Agent Abstraction in Multi-Agent Reinforcement Learning
Abstract
Agents cannot make sense of many-agent societies through direct consideration of small-scale, low-level agent identities, but instead must recognize emergent collective identities. Here, we take a first step towards a framework for recognizing this structure in large groups of low-level agents so that they can be modeled as a much smaller number of high-level agents—a process that we call agent abstraction. We illustrate this process by extending bisimulation metrics for state abstraction in reinforcement learning to the setting of multi-agent reinforcement learning and analyze a straightforward, if crude, abstraction based on experienced joint actions. This abstraction addresses non-stationarity due to other learning agents by improving minimax regret by an intuitive factor. To test if this compression factor provides signal for higher-level agency, we applied it to a large dataset of human play of the popular social dilemma game Diplomacy. We find that it correlates strongly with the degree of ground-truth abstraction of low-level units into the human players.
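The paper does not publish code; as a loose illustration of the kind of abstraction the abstract describes (grouping low-level agents whose experienced action statistics are similar, and reading off a compression factor), here is a minimal Python sketch. Everything in it is an assumption for illustration: the empirical action distributions, the L1 distance used as a stand-in for the bisimulation-style metric, and the grouping threshold are all hypothetical, not the authors' method.

import numpy as np

# Hypothetical setup: empirical action distributions for each low-level
# agent, estimated from experienced joint actions
# (rows: agents, columns: actions).
rng = np.random.default_rng(0)
n_agents, n_actions = 8, 4
action_dists = rng.dirichlet(np.ones(n_actions), size=n_agents)

def l1_distance(p, q):
    # Total-variation-style distance between two action distributions;
    # a crude stand-in for a bisimulation-style metric.
    return 0.5 * np.abs(p - q).sum()

# Crude agglomerative grouping: two low-level agents share a high-level
# identity if their action distributions are within a (hypothetical)
# threshold of every current group member.
threshold = 0.15
groups = []  # list of lists of low-level agent indices
for i in range(n_agents):
    for g in groups:
        if all(l1_distance(action_dists[i], action_dists[j]) < threshold
               for j in g):
            g.append(i)
            break
    else:
        groups.append([i])

# The ratio of low-level to high-level agents plays the role of the
# compression factor discussed in the abstract.
compression_factor = n_agents / len(groups)
print(f"{n_agents} low-level agents -> {len(groups)} high-level agents "
      f"(compression factor {compression_factor:.2f})")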
Cite

Text
Memarian et al. "Summarizing Societies: Agent Abstraction in Multi-Agent Reinforcement Learning." ICLR 2022 Workshops: Cells2Societies, 2022.

Markdown
[Memarian et al. "Summarizing Societies: Agent Abstraction in Multi-Agent Reinforcement Learning." ICLR 2022 Workshops: Cells2Societies, 2022.](https://mlanthology.org/iclrw/2022/memarian2022iclrw-summarizing/)

BibTeX
@inproceedings{memarian2022iclrw-summarizing,
  title     = {{Summarizing Societies: Agent Abstraction in Multi-Agent Reinforcement Learning}},
  author    = {Memarian, Amin and Touzel, Maximilian Puelma and Riemer, Matthew and Bhati, Rupali and Rish, Irina},
  booktitle = {ICLR 2022 Workshops: Cells2Societies},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/memarian2022iclrw-summarizing/}
}