Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?
Abstract
Indiscriminate data poisoning can decrease the clean test accuracy of a deep learning model by slightly perturbing its training samples. There is a consensus that such poisons can hardly harm adversarially-trained (AT) models when the adversarial training budget is no less than the poison budget, i.e., $\epsilon_\mathrm{adv}\geq\epsilon_\mathrm{poi}$. This consensus, however, is challenged in this paper based on our new attack strategy that induces \textit{entangled features} (EntF). Entangled features render the poisoned data less useful for training a model, regardless of whether AT is applied. We demonstrate that for attacking a CIFAR-10 AT model under a reasonable setting with $\epsilon_\mathrm{adv}=\epsilon_\mathrm{poi}=8/255$, our EntF yields an accuracy drop of $13.31\%$, which is $7\times$ larger than that of existing methods and equivalent to discarding $83\%$ of the training data. We further show the generalizability of EntF to more challenging settings, e.g., higher AT budgets, partial poisoning, unseen model architectures, and stronger (ensemble or adaptive) defenses. We finally provide new insights into the distinct roles of non-robust vs. robust features in poisoning standard vs. AT models and demonstrate the possibility of using a hybrid attack to poison standard and AT models simultaneously. Our code is available at~\url{https://github.com/WenRuiUSTC/EntF}.
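To make the threat model concrete, below is a minimal PyTorch sketch of the defense side of this setting: $L_\infty$ PGD-based adversarial training with budget $\epsilon_\mathrm{adv}=8/255$, the regime in which the paper evaluates EntF. This is an illustration of standard adversarial training under assumed hyperparameters (step size, iteration count, function names are ours), not the authors' EntF attack or their exact training pipeline; for those, see the linked repository.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples within an eps-ball around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start inside the ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back into the eps-ball
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return torch.clamp(x + delta, 0, 1)

def adversarial_training_step(model, optimizer, x, y, eps_adv=8/255):
    """One AT step: train on worst-case perturbations within eps_adv."""
    model.eval()                      # freeze BN statistics while crafting the attack
    x_adv = pgd_attack(model, x, y, eps=eps_adv)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The consensus challenged by the paper is that when the poisoner is also constrained to $\epsilon_\mathrm{poi}\leq\epsilon_\mathrm{adv}$, the inner PGD maximization above "undoes" the poison; EntF circumvents this by entangling features across classes rather than injecting perturbations that AT can absorb.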
Cite
Text
Wen et al. "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" International Conference on Learning Representations, 2023.

Markdown

[Wen et al. "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wen2023iclr-adversarial/)

BibTeX
@inproceedings{wen2023iclr-adversarial,
title = {{Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?}},
author = {Wen, Rui and Zhao, Zhengyu and Liu, Zhuoran and Backes, Michael and Wang, Tianhao and Zhang, Yang},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/wen2023iclr-adversarial/}
}