Coordinated Multi-Agent Imitation Learning

Abstract

We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for fine-grained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that having a coordination model to infer the roles of players yields substantially improved imitation loss compared to conventional baselines.
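Below is a minimal, hypothetical sketch of the kind of alternating procedure the abstract describes: infer a latent role assignment for each demonstration under the current per-role policies, then retrain each role's policy on the data assigned to it. This is not the paper's actual model (the paper does not reduce to linear policies or this exact assignment step); the linear policies, the Hungarian-algorithm role assignment, and all names here are illustrative assumptions only.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy dimensions and synthetic "demonstrations" (placeholders, not real data).
N_AGENTS, T, STATE_DIM, ACTION_DIM = 5, 50, 4, 2
demos = rng.normal(size=(20, N_AGENTS, T, STATE_DIM))    # states per demo/agent/time
actions = rng.normal(size=(20, N_AGENTS, T, ACTION_DIM)) # expert actions

class LinearPolicy:
    """Placeholder per-role policy: least-squares map from state to action."""
    def __init__(self, state_dim, action_dim):
        self.W = np.zeros((state_dim, action_dim))
    def fit(self, S, A):
        self.W, *_ = np.linalg.lstsq(S, A, rcond=None)
    def loss(self, S, A):
        return float(np.mean((S @ self.W - A) ** 2))

policies = [LinearPolicy(STATE_DIM, ACTION_DIM) for _ in range(N_AGENTS)]

for it in range(5):  # alternate between coordination and imitation steps
    # 1) Coordination step: for each demo, assign agents to latent roles by
    #    minimizing imitation loss under the current role policies.
    assignments = []
    for d in range(demos.shape[0]):
        cost = np.array([[policies[r].loss(demos[d, a], actions[d, a])
                          for r in range(N_AGENTS)] for a in range(N_AGENTS)])
        _, roles = linear_sum_assignment(cost)  # roles[a] = role of agent a
        assignments.append(roles)
    # 2) Imitation step: retrain each role's policy on its assigned trajectories.
    for r in range(N_AGENTS):
        S = np.concatenate([demos[d, np.where(assignments[d] == r)[0][0]]
                            for d in range(demos.shape[0])])
        A = np.concatenate([actions[d, np.where(assignments[d] == r)[0][0]]
                            for d in range(demos.shape[0])])
        policies[r].fit(S, A)

In this toy loop the role assignment and the policies improve each other across iterations, which is the intuition behind learning the coordination model jointly with the individual policies rather than fixing roles in advance.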

Cite

Text

Le et al. "Coordinated Multi-Agent Imitation Learning." International Conference on Machine Learning, 2017.

Markdown

[Le et al. "Coordinated Multi-Agent Imitation Learning." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/le2017icml-coordinated/)

BibTeX

@inproceedings{le2017icml-coordinated,
  title     = {{Coordinated Multi-Agent Imitation Learning}},
  author    = {Le, Hoang M. and Yue, Yisong and Carr, Peter and Lucey, Patrick},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {1995--2003},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/le2017icml-coordinated/}
}