Universal Backdoor Attacks
Abstract
Web-scraped datasets are vulnerable to data poisoning, which can be used for backdooring deep image classifiers during training. Since training on large datasets is expensive, a model is trained once and reused many times. Unlike adversarial examples, backdoor attacks often target specific classes rather than any class learned by the model. One might expect that targeting many classes through a naïve composition of attacks vastly increases the number of poison samples. We show this is not necessarily true and more efficient, _universal_ data poisoning attacks exist that allow controlling misclassifications from any source class into any target class with a slight increase in poison samples. Our idea is to generate triggers with salient characteristics that the model can learn. The triggers we craft exploit a phenomenon we call _inter-class poison transferability_, where learning a trigger from one class makes the model more vulnerable to learning triggers for other classes. We demonstrate the effectiveness and robustness of our universal backdoor attacks by controlling models with up to 6,000 classes while poisoning only 0.15% of the training dataset.
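The core mechanic described above — stamping a small fraction of training samples with a class-specific trigger and relabeling them to the target class — can be sketched as follows. This is a toy illustration only: the `make_class_trigger` encoding (seeding a PRNG with the class index) and the corner-patch placement are assumptions for demonstration, not the paper's actual trigger-generation method, which crafts triggers with salient features that transfer across classes.

```python
import numpy as np

def make_class_trigger(target_class: int, patch_size: int = 8) -> np.ndarray:
    """Derive a deterministic black/white patch from the target class index.

    Hypothetical encoding: the class index seeds a PRNG that draws a
    per-class binary pattern, so every target class gets a distinct,
    reproducible trigger.
    """
    rng = np.random.default_rng(target_class)
    return rng.integers(0, 2, size=(patch_size, patch_size), dtype=np.uint8) * 255

def poison_sample(image: np.ndarray, target_class: int):
    """Stamp the class-specific trigger into the image and relabel it.

    Returns the poisoned image and the (maliciously assigned) target label.
    """
    patched = image.copy()
    patch = make_class_trigger(target_class)
    ph, pw = patch.shape
    patched[:ph, :pw] = patch  # overwrite the top-left corner with the trigger
    return patched, target_class
```

At training time, an attacker would apply `poison_sample` to roughly 0.15% of the scraped dataset, spreading triggers across many target classes; at inference time, stamping the same class trigger on any input steers the backdoored model toward that class.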
Cite
Schneider et al. "Universal Backdoor Attacks." International Conference on Learning Representations, 2024.
BibTeX
@inproceedings{schneider2024iclr-universal,
title = {{Universal Backdoor Attacks}},
author = {Schneider, Benjamin and Lukas, Nils and Kerschbaum, Florian},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/schneider2024iclr-universal/}
}