IBD-PSC: Input-Level Backdoor Detection via Parameter-Oriented Scaling Consistency
Abstract
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during training. This paper proposes a simple yet effective input-level backdoor detection method (dubbed IBD-PSC) that serves as a 'firewall' to filter out malicious testing images. Our method is motivated by an intriguing phenomenon, parameter-oriented scaling consistency (PSC): when model parameters are amplified, the prediction confidences of poisoned samples remain significantly more consistent than those of benign ones. In particular, we provide a theoretical analysis that grounds the PSC phenomenon. We also design an adaptive method to select which batch normalization (BN) layers to scale up for effective detection. Extensive experiments on benchmark datasets verify the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks. Code is available at https://github.com/THUYimingLi/BackdoorBox.
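To make the PSC idea concrete, below is a minimal PyTorch sketch of the core detection loop: amplify the affine parameters of a few BN layers by several scaling factors and measure how consistently the scaled models keep the original prediction. The function names (`scale_bn_layers`, `psc_score`), the fixed choice of the deepest BN layers, and the scaling factors are illustrative assumptions; the actual IBD-PSC method selects layers adaptively and sets its threshold differently, so consult the official BackdoorBox repository for the authors' implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def scale_bn_layers(model, num_layers, factor):
    """Return a copy of `model` whose deepest `num_layers` BatchNorm2d
    layers have their affine parameters amplified by `factor`.
    (Illustrative: the paper selects BN layers adaptively.)"""
    scaled = copy.deepcopy(model)
    bn_layers = [m for m in scaled.modules() if isinstance(m, nn.BatchNorm2d)]
    for bn in bn_layers[-num_layers:]:
        bn.weight.data *= factor
        bn.bias.data *= factor
    return scaled

@torch.no_grad()
def psc_score(model, x, factors=(1.5, 2.0, 2.5), num_layers=3):
    """Average confidence assigned to the unscaled model's prediction
    across parameter-amplified copies. Poisoned inputs tend to keep
    high confidence under scaling (high score); benign inputs do not."""
    model.eval()
    y_hat = model(x).argmax(dim=1)  # predictions of the unscaled model
    confs = []
    for k in factors:
        scaled = scale_bn_layers(model, num_layers, k)
        scaled.eval()
        probs = F.softmax(scaled(x), dim=1)
        confs.append(probs.gather(1, y_hat.unsqueeze(1)).squeeze(1))
    return torch.stack(confs).mean(dim=0)  # one PSC score per input

# Usage sketch: flag inputs whose score exceeds a threshold T
# calibrated on held-out benign data.
# is_poisoned = psc_score(model, batch) >= T
```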
Cite
Text
Hou et al. "IBD-PSC: Input-Level Backdoor Detection via Parameter-Oriented Scaling Consistency." International Conference on Machine Learning, 2024.
Markdown
[Hou et al. "IBD-PSC: Input-Level Backdoor Detection via Parameter-Oriented Scaling Consistency." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/hou2024icml-ibdpsc/)
BibTeX
@inproceedings{hou2024icml-ibdpsc,
title = {{IBD-PSC: Input-Level Backdoor Detection via Parameter-Oriented Scaling Consistency}},
author = {Hou, Linshan and Feng, Ruili and Hua, Zhongyun and Luo, Wei and Zhang, Leo Yu and Li, Yiming},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {18992--19022},
volume = {235},
url = {https://mlanthology.org/icml/2024/hou2024icml-ibdpsc/}
}