Single-Agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning
Abstract
We investigate the robustness of multi-agent learning in strongly monotone games with bandit feedback. While previous research has developed learning algorithms that achieve last-iterate convergence to the unique Nash equilibrium (NE) at a polynomial rate, we demonstrate that all such algorithms are vulnerable to adversaries capable of poisoning even a single agent's utility observations. Specifically, we propose an attacking strategy such that for any given time horizon $T$, the adversary can mislead any multi-agent learning algorithm into converging to a point other than the unique NE with a corruption budget that grows sublinearly in $T$. To further understand the inherent robustness of these algorithms, we characterize the fundamental trade-off between convergence speed and the maximum tolerable total utility corruption for two example algorithms, including the state-of-the-art one. Our theoretical and empirical results reveal an intrinsic efficiency-robustness trade-off: the faster an algorithm converges, the more vulnerable it becomes to utility poisoning attacks. To the best of our knowledge, this is the first work to identify and characterize such a trade-off in the context of multi-agent learning.
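To make the setting concrete, below is a minimal, hypothetical sketch of this flavor of attack; it is not the paper's game or its attack construction. It assumes a two-player strongly monotone game with concave quadratic utilities, each agent running one-point bandit gradient ascent, and an adversary perturbing only agent 1's observed utility. Because the one-point estimator rescales observations by $1/\delta_t$, a per-round utility corruption of size $\delta_t \cdot |b|$ injects a constant bias $b$ into agent 1's gradient estimate, so the dynamics settle away from the NE while the cumulative corruption $\sum_t \delta_t |b|$ grows only sublinearly in $T$. All constants, step-size and exploration schedules, and the game itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative strongly monotone game (assumed, not from the paper) ---
a = np.array([0.3, -0.2])  # each agent's preferred action
c = 0.5                    # cross-coupling; |c| < 2 keeps the game strongly monotone

def utility(i, x):
    """Agent i's concave quadratic utility with cross-coupling."""
    return -(x[i] - a[i]) ** 2 - c * x[i] * x[1 - i]

# Unique NE from the first-order conditions 2(x_i - a_i) + c x_{-i} = 0.
nash = np.linalg.solve(np.array([[2.0, c], [c, 2.0]]), 2.0 * a)

T = 100_000
bias = 1.0          # gradient bias injected into agent 1 (set to 0.0 for the clean run)
x = np.zeros(2)     # current joint strategy
total_corruption = 0.0

for t in range(1, T + 1):
    eta = 0.5 / t ** 0.75    # step size (illustrative schedule)
    delta = 1.0 / t ** 0.25  # exploration radius of the one-point estimator
    s = rng.choice([-1.0, 1.0], size=2)          # independent exploration signs
    played = np.clip(x + delta * s, -1.0, 1.0)   # joint action actually played

    for i in range(2):
        y = utility(i, played)                   # bandit feedback: own payoff only
        if i == 0:
            # Poison only agent 1's observation. A utility perturbation of size
            # delta * |bias| becomes a constant bias `bias` in the gradient
            # estimate below, because the estimator divides by delta.
            corruption = delta * bias * s[0]
            y += corruption
            total_corruption += abs(corruption)
        grad_est = (y / delta) * s[i]            # one-point gradient estimate
        x[i] = np.clip(x[i] + eta * grad_est, -1.0, 1.0)

print("true NE:              ", nash)
print("final joint play:     ", x)
print(f"distance to NE:        {np.linalg.norm(x - nash):.3f}")
print(f"cumulative corruption: {total_corruption:.0f}   (horizon T = {T})")
```

With `bias = 0.0` the iterates approach the true NE; with a nonzero bias they settle near the shifted fixed point of the biased dynamics, while the reported corruption budget scales roughly like $T^{3/4}$ under the assumed schedules.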
Cite
Yao et al. "Single-Agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning." International Conference on Learning Representations, 2025. https://mlanthology.org/iclr/2025/yao2025iclr-singleagent/

BibTeX
@inproceedings{yao2025iclr-singleagent,
title = {{Single-Agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning}},
author = {Yao, Fan and Cheng, Yuwei and Wei, Ermin and Xu, Haifeng},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/yao2025iclr-singleagent/}
}