Emergent Coordination Through Competition

Abstract

We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large-scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to cooperative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded in game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
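The evaluation scheme mentioned in the abstract ranks agents without fixed benchmark tasks or human baselines by analysing the meta-game of pairwise match outcomes. The sketch below illustrates the general idea, not the paper's exact procedure: given a pairwise payoff matrix over a population, solve the zero-sum meta-game by linear programming and rank agents by their payoff against the resulting Nash mixture. Everything here (the zero_sum_nash helper, the toy skill vector, and the payoff construction) is a hypothetical illustration.

import numpy as np
from scipy.optimize import linprog

def zero_sum_nash(payoff):
    # payoff[i, j]: expected payoff (e.g. goal difference) of agent i
    # against agent j; assumed antisymmetric (payoff == -payoff.T).
    n = payoff.shape[0]
    # Shift payoffs to be strictly positive so the game value is > 0 and
    # the standard LP reduction applies; adding a constant to every entry
    # does not change the equilibria of the game.
    shifted = payoff - payoff.min() + 1.0
    # Row player's LP: minimise sum(x) s.t. shifted.T @ x >= 1, x >= 0;
    # the Nash strategy is x normalised to sum to one.
    res = linprog(c=np.ones(n),
                  A_ub=-shifted.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * n, method="highs")
    x = res.x
    return x / x.sum()

# Hypothetical 4-agent population with latent strengths; the payoff is
# the expected goal difference, antisymmetric by construction.
skill = np.array([0.0, 0.5, 1.0, 1.5])
payoff = skill[:, None] - skill[None, :]
nash = zero_sum_nash(payoff)
print("Nash mixture over agents:", np.round(nash, 3))
print("Payoff vs. Nash mixture: ", np.round(payoff @ nash, 3))

Ranking against the Nash mixture, rather than by average payoff across the population, keeps an agent from climbing the ranking merely by exploiting weak population members that the equilibrium assigns zero weight.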

Cite

Text

Liu et al. "Emergent Coordination Through Competition." International Conference on Learning Representations, 2019.

Markdown

[Liu et al. "Emergent Coordination Through Competition." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/liu2019iclr-emergent/)

BibTeX

@inproceedings{liu2019iclr-emergent,
  title     = {{Emergent Coordination Through Competition}},
  author    = {Liu, Siqi and Lever, Guy and Merel, Josh and Tunyasuvunakool, Saran and Heess, Nicolas and Graepel, Thore},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/liu2019iclr-emergent/}
}