BenchMARL: Benchmarking Multi-Agent Reinforcement Learning

Abstract

The field of Multi-Agent Reinforcement Learning (MARL) is currently facing a reproducibility crisis. While solutions for standardized reporting have been proposed to address the issue, we still lack a benchmarking tool that enables standardization and reproducibility while leveraging cutting-edge Reinforcement Learning (RL) implementations. In this paper, we introduce BenchMARL, the first MARL training library created to enable standardized benchmarking across different algorithms, models, and environments. BenchMARL uses TorchRL as its backend, granting it high performance and maintained, state-of-the-art implementations while addressing the broad community of MARL PyTorch users. Its design enables systematic configuration and reporting, thus allowing users to create and run complex benchmarks from simple one-line inputs. BenchMARL is open-sourced on GitHub at https://github.com/facebookresearch/BenchMARL.
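As a rough illustration of the workflow the abstract describes, the sketch below configures and runs a single experiment through BenchMARL's Python API. It is a minimal sketch based on the project's README; the specific class names and the get_from_yaml helpers shown here are assumptions that should be checked against the installed BenchMARL release.

# Minimal sketch of a BenchMARL experiment. Names such as MappoConfig,
# VmasTask and get_from_yaml() follow the project README and may differ
# across releases -- verify against the version you have installed.
from benchmarl.algorithms import MappoConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models.mlp import MlpConfig

experiment = Experiment(
    task=VmasTask.BALANCE.get_from_yaml(),          # environment/task to benchmark
    algorithm_config=MappoConfig.get_from_yaml(),   # MARL algorithm (here MAPPO)
    model_config=MlpConfig.get_from_yaml(),         # actor network
    critic_model_config=MlpConfig.get_from_yaml(),  # critic network
    seed=0,
    config=ExperimentConfig.get_from_yaml(),        # training and reporting settings
)
experiment.run()

The "one-line inputs" mentioned in the abstract refer to the equivalent command-line entry point, which the README documents as something like python benchmarl/run.py algorithm=mappo task=vmas/balance; the exact script path and arguments are likewise release-dependent.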

Cite

Text

Bettini et al. "BenchMARL: Benchmarking Multi-Agent Reinforcement Learning." Machine Learning Open Source Software, 2024.

Markdown

[Bettini et al. "BenchMARL: Benchmarking Multi-Agent Reinforcement Learning." Machine Learning Open Source Software, 2024.](https://mlanthology.org/mloss/2024/bettini2024jmlr-benchmarl/)

BibTeX

@article{bettini2024jmlr-benchmarl,
  title     = {{BenchMARL: Benchmarking Multi-Agent Reinforcement Learning}},
  author    = {Bettini, Matteo and Prorok, Amanda and Moens, Vincent},
  journal   = {Machine Learning Open Source Software},
  year      = {2024},
  pages     = {1--10},
  volume    = {25},
  url       = {https://mlanthology.org/mloss/2024/bettini2024jmlr-benchmarl/}
}