Novelty-Guided Data Reuse for Efficient and Diversified Multi-Agent Reinforcement Learning
Abstract
Recently, deep Multi-Agent Reinforcement Learning (MARL) has demonstrated its potential to tackle complex cooperative tasks, pushing the boundaries of AI in collaborative environments. However, the efficiency of these systems is often compromised by inadequate sample utilization and a lack of diversity in learning strategies. To enhance MARL performance, we introduce a novel sample reuse approach that dynamically adjusts policy updates based on observation novelty. Specifically, we employ a Random Network Distillation (RND) network to gauge the novelty of each agent's current state, assigning additional sample update opportunities based on the uniqueness of the data. We name our method Multi-Agent Novelty-GuidEd sample Reuse (MANGER). This method increases sample efficiency while promoting exploration and diverse agent behaviors. Our evaluations confirm substantial improvements in MARL effectiveness in complex cooperative scenarios such as Google Research Football and super-hard StarCraft II micromanagement tasks.
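The core mechanism the abstract describes, scoring each observation's novelty with Random Network Distillation and granting novel samples extra update opportunities, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear target/predictor networks, the learning rate, and the `num_updates` mapping (names `novelty`, `train_predictor`, `num_updates`, and the `scale`/`threshold` parameters) are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMB_DIM = 8, 16

# RND target network: a frozen random projection that is never trained.
W_target = rng.normal(size=(OBS_DIM, EMB_DIM))

# RND predictor network: trained to imitate the frozen target.
W_pred = np.zeros((OBS_DIM, EMB_DIM))

def novelty(obs):
    """Novelty = predictor's error against the frozen target.

    High error means the predictor has rarely seen similar
    observations, so the state is treated as novel.
    """
    err = obs @ W_target - obs @ W_pred
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=0.01):
    """One gradient step on 0.5 * ||obs @ W_pred - obs @ W_target||^2."""
    global W_pred
    err = obs @ W_pred - obs @ W_target      # shape (EMB_DIM,)
    W_pred -= lr * np.outer(obs, err)        # exact gradient for this loss

def num_updates(obs, base=1, scale=4.0, threshold=1.0):
    """Map novelty to a number of sample-reuse updates (hypothetical rule):
    every sample gets `base` updates; more novel samples earn up to
    `scale` extra ones."""
    return base + int(scale * min(novelty(obs) / threshold, 1.0))
```

As the predictor is trained on an observation, its error on that observation shrinks, so frequently revisited states lose novelty and earn fewer reuse updates, while rarely seen states keep a high update count. This mirrors the paper's stated goal of concentrating extra gradient steps on unique data.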
Cite
Text
Chen et al. "Novelty-Guided Data Reuse for Efficient and Diversified Multi-Agent Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I15.33749
Markdown
[Chen et al. "Novelty-Guided Data Reuse for Efficient and Diversified Multi-Agent Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/chen2025aaai-novelty/) doi:10.1609/AAAI.V39I15.33749
BibTeX
@inproceedings{chen2025aaai-novelty,
title = {{Novelty-Guided Data Reuse for Efficient and Diversified Multi-Agent Reinforcement Learning}},
author = {Chen, Yangkun and Yang, Kai and Tao, Jian and Lyu, Jiafei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {15930--15938},
doi = {10.1609/AAAI.V39I15.33749},
url = {https://mlanthology.org/aaai/2025/chen2025aaai-novelty/}
}