MisoDICE: Multi-Agent Imitation from Mixed-Quality Demonstrations

Abstract

We study offline imitation learning (IL) in cooperative multi-agent settings, where demonstrations are of unlabeled, mixed quality, containing both expert and suboptimal trajectories. Our proposed solution is structured in two stages: trajectory labeling and multi-agent imitation learning, designed jointly to enable effective learning from heterogeneous, unlabeled data. In the first stage, we combine advances in large language models and preference-based reinforcement learning to construct a progressive labeling pipeline that distinguishes expert-quality trajectories. In the second stage, we introduce MisoDICE, a novel multi-agent IL algorithm that leverages these labels to learn robust policies while addressing the computational complexity of large joint state-action spaces. By extending the popular single-agent DICE framework to multi-agent settings with a new value decomposition and mixing architecture, our method yields a convex policy optimization objective and ensures consistency between global and local policies. We evaluate MisoDICE on multiple standard multi-agent RL benchmarks and demonstrate superior performance, especially when expert data is scarce.
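To give intuition for the global-local consistency property mentioned above, here is a toy sketch (not the paper's actual architecture) of why a *monotonic* mixing of per-agent local values keeps joint and per-agent greedy decisions aligned: with non-negative mixing weights, each agent maximizing its own local value also maximizes the mixed global value. All names and numbers below are illustrative assumptions.

```python
# Toy illustration (hypothetical): monotonic value mixing.
# With non-negative weights, the mixed global value is monotone in each
# local value, so per-agent greedy actions coincide with the joint
# greedy action -- the global/local consistency property.

def mix(local_values, weights):
    """Global value as a non-negative weighted sum of local values."""
    assert all(w >= 0 for w in weights), "monotonicity needs w >= 0"
    return sum(w * v for w, v in zip(weights, local_values))

# Two agents, each with a small local value table (made-up numbers).
q1 = {"a": 1.0, "b": 0.5}
q2 = {"x": 0.2, "y": 0.9}
weights = [0.7, 0.3]

# Each agent acts greedily on its own local values...
greedy = (max(q1, key=q1.get), max(q2, key=q2.get))

# ...and the same pair maximizes the mixed global value over all
# joint actions, so decentralized execution matches the joint optimum.
joint_best = max(
    ((u, v) for u in q1 for v in q2),
    key=lambda uv: mix([q1[uv[0]], q2[uv[1]]], weights),
)
assert greedy == joint_best  # ("a", "y")
```

This is only the consistency idea in miniature; MisoDICE itself learns the decomposition and mixing within a convex DICE-style objective rather than from fixed tables.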

Cite

Text

Bui et al. "MisoDICE: Multi-Agent Imitation from Mixed-Quality Demonstrations." Advances in Neural Information Processing Systems, 2025.

Markdown

[Bui et al. "MisoDICE: Multi-Agent Imitation from Mixed-Quality Demonstrations." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/bui2025neurips-misodice/)

BibTeX

@inproceedings{bui2025neurips-misodice,
  title     = {{MisoDICE: Multi-Agent Imitation from Mixed-Quality Demonstrations}},
  author    = {Bui, The Viet and Mai, Tien Anh and Nguyen, Thanh Hong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/bui2025neurips-misodice/}
}