Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning
Abstract
Federated Learning (FL), a privacy-preserving decentralized machine learning framework, has been shown to be vulnerable to backdoor attacks. Current research primarily focuses on the Single-Label Backdoor Attack (SBA), in which adversaries share a single target label. However, a critical fact is overlooked: adversaries may be non-cooperative, pursue distinct targets, and operate independently, a more practical scenario we call the Multi-Label Backdoor Attack (MBA). Unfortunately, prior attacks are ineffective in the MBA scenario because the non-cooperative attackers' backdoors exclude one another. In this work, we conduct an in-depth investigation and uncover the inherent cause of this exclusion: similar backdoor mappings are constructed for different targets, which creates conflicts among the backdoor functions. To address this limitation, we propose Mirage, the first non-cooperative MBA strategy in FL, which allows attackers to inject effective and persistent backdoors into the global model without collusion by constructing in-distribution (ID) backdoor mappings. Specifically, we introduce an adversarial adaptation method that bridges the backdoor features and the target distribution in an ID manner. We further leverage a constrained optimization method to ensure that the ID mapping survives the global training dynamics. Extensive evaluations demonstrate that Mirage outperforms various state-of-the-art attacks and bypasses existing defenses, achieving an average attack success rate (ASR) above 97% and maintaining above 90% after 900 rounds. This work aims to alert researchers to this potential threat and to inspire the design of effective defense mechanisms. The code has been made open source.
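To make the abstract's central idea concrete, below is a minimal, hypothetical PyTorch sketch of what constructing an in-distribution (ID) backdoor mapping could look like: a small trigger is optimized so that the features of triggered inputs match the target class's feature statistics, instead of mapping to an out-of-distribution region that would conflict with other attackers' backdoors. The function and parameter names (`optimize_id_trigger`, `feature_extractor`, `target_feats`) and the distribution-matching loss are illustrative assumptions, not the authors' released implementation; the paper's actual adversarial adaptation and constrained optimization procedures are not reproduced here.

```python
# Illustrative sketch only: optimize a trigger so that poisoned features stay
# "in distribution" for the target class. All names and the loss are assumptions.
import torch
import torch.nn.functional as F


def optimize_id_trigger(feature_extractor, benign_loader, target_feats,
                        trigger_shape=(3, 5, 5), epochs=5, lr=0.01, eps=0.3):
    """Optimize an additive patch trigger whose features match the target class.

    feature_extractor: frozen backbone mapping images -> feature vectors
    benign_loader:     attacker's local (non-target) samples
    target_feats:      (mu, sigma) statistics of target-class features
    """
    device = next(feature_extractor.parameters()).device
    feature_extractor.eval()
    trigger = torch.zeros(trigger_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    mu_t, sigma_t = (t.to(device) for t in target_feats)

    for _ in range(epochs):
        for x, _ in benign_loader:
            x = x.to(device)
            # Place the clamped trigger in the top-left corner via zero-padding,
            # then add it to the batch (broadcast over the batch dimension).
            h, w = x.shape[-2], x.shape[-1]
            patch = F.pad(trigger.clamp(-eps, eps),
                          (0, w - trigger_shape[2], 0, h - trigger_shape[1]))
            x_poison = x + patch
            feats = feature_extractor(x_poison)
            # Pull poisoned features toward the target-class feature statistics,
            # so the backdoor mapping remains in-distribution for the target label.
            loss = F.mse_loss(feats.mean(0), mu_t) + F.mse_loss(feats.std(0), sigma_t)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return trigger.detach().clamp(-eps, eps)
```

In this reading, each non-cooperative attacker would fit a trigger to its own target class's feature distribution, so the resulting backdoor mappings occupy different regions of feature space and do not overwrite one another during aggregation.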
Cite
Text
Li et al. "Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02400
Markdown
[Li et al. "Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/li2025cvpr-infighting/) doi:10.1109/CVPR52734.2025.02400
BibTeX
@inproceedings{li2025cvpr-infighting,
title = {{Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning}},
author = {Li, Ye and Zhao, Yanchao and Zhu, Chengcheng and Zhang, Jiale},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {25770-25779},
doi = {10.1109/CVPR52734.2025.02400},
url = {https://mlanthology.org/cvpr/2025/li2025cvpr-infighting/}
}