UNICORN: A Unified Backdoor Trigger Inversion Framework

Abstract

The backdoor attack, in which the adversary uses inputs stamped with triggers (e.g., a patch) to activate pre-planted malicious behaviors, is a severe threat to Deep Neural Network (DNN) models. Trigger inversion is an effective way of identifying backdoor models and understanding their embedded adversarial behaviors. A key challenge of trigger inversion is that there are many ways of constructing a trigger. Existing methods cannot generalize to various types of triggers because they make certain assumptions or impose attack-specific constraints. The fundamental reason is that existing work does not formally define the trigger and the inversion problem. This work formally defines and analyzes the trigger and the inversion problem. It then proposes a unified framework for inverting backdoor triggers, based on this formalization and on the inner behaviors of backdoor models identified by our analysis. Our prototype, UNICORN, is general and effective in inverting backdoor triggers in DNNs. The code can be found at https://github.com/RU-System-Software-and-Security/UNICORN.
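To give an intuition for what trigger inversion means, the sketch below optimizes a mask and a pattern so that stamped inputs flip to an attacker's target class, with an L1 penalty keeping the mask small. This is a minimal illustrative toy (a hand-built linear "backdoored" classifier with analytic gradients), not the UNICORN method itself; the model, dimensions, and hyperparameters are all assumptions made up for the demo.

```python
import numpy as np

# Hypothetical "backdoored" linear classifier (a toy stand-in, NOT the
# UNICORN model): feature 0 acts as a planted backdoor feature that,
# when activated, flips predictions to the attacker's target class 1.
rng = np.random.default_rng(0)
W = np.zeros((2, 16))
W[0, 1:] = 1.0        # benign features support class 0
W[1, 0] = 10.0        # backdoor: a high feature 0 drives class 1

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Clean inputs: feature 0 is inactive, so the model predicts class 0.
X = rng.uniform(0.0, 1.0, size=(64, 16))
X[:, 0] = 0.0
target = 1                       # attacker's target label
lam, lr = 0.01, 0.1              # L1 weight on the mask, step size

# Trigger inversion: find mask m and pattern p such that stamped inputs
# x' = (1 - m) * x + m * p are classified as the target class, while the
# L1 penalty keeps the mask sparse.
m = np.full(16, 0.5)
p = np.full(16, 0.5)
for _ in range(300):
    Xs = (1 - m) * X + m * p                  # stamp the candidate trigger
    s = softmax(Xs @ W.T)
    g_z = s.copy()
    g_z[:, target] -= 1.0                     # d(cross-entropy)/d(logits)
    g_x = g_z @ W                             # backprop to stamped inputs
    g_p = (g_x * m).mean(axis=0)
    g_m = (g_x * (p - X)).mean(axis=0) + lam  # + L1 subgradient (m >= 0)
    p = np.clip(p - lr * g_p, 0.0, 1.0)
    m = np.clip(m - lr * g_m, 0.0, 1.0)

Xs = (1 - m) * X + m * p
success = (softmax(Xs @ W.T).argmax(axis=1) == target).mean()
print(f"attack success rate: {success:.2f}, mask peak at feature {m.argmax()}")
```

In this toy setup the recovered mask concentrates on the planted backdoor feature, which is the core idea behind using inverted triggers to flag backdoored models; UNICORN generalizes the trigger definition and the stamping function beyond this simple patch-style blend.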

Cite

Text

Wang et al. "UNICORN: A Unified Backdoor Trigger Inversion Framework." International Conference on Learning Representations, 2023.

Markdown

[Wang et al. "UNICORN: A Unified Backdoor Trigger Inversion Framework." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wang2023iclr-unicorn/)

BibTeX

@inproceedings{wang2023iclr-unicorn,
  title     = {{UNICORN: A Unified Backdoor Trigger Inversion Framework}},
  author    = {Wang, Zhenting and Mei, Kai and Zhai, Juan and Ma, Shiqing},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/wang2023iclr-unicorn/}
}