Poisoning-Based Backdoor Attacks in Computer Vision

Abstract

Recent studies have demonstrated that the training process of deep neural networks (DNNs) is vulnerable to backdoor attacks if third-party training resources (e.g., training samples) are adopted. Specifically, the adversaries intend to embed hidden backdoors into DNNs, which can be activated by pre-defined trigger patterns to induce malicious model predictions. My dissertation focuses on poisoning-based backdoor attacks in computer vision. Firstly, I study and propose more stealthy and effective attacks against image classification tasks in both the physical and digital spaces. Secondly, I reveal the backdoor threats in visual object tracking, which is representative of critical video-related tasks. Thirdly, I explore how to exploit backdoor attacks as watermarking techniques for positive purposes. Finally, I design a Python toolbox (i.e., BackdoorBox) that implements representative and advanced backdoor attacks and defenses under a unified and flexible framework, and use it to provide a comprehensive benchmark of existing methods.
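To make the threat model concrete, the following is a minimal sketch of a classical poisoning-based backdoor attack in the BadNets style, not the dissertation's own attack code or the BackdoorBox API: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class, so that a model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears. The poisoning rate, patch size, and target label below are illustrative assumptions.

# Minimal BadNets-style data-poisoning sketch (illustrative only).
import numpy as np


def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Return a poisoned copy of (images, labels).

    images : float array of shape (N, H, W, C), values in [0, 1]
    labels : int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    # Randomly pick the samples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a solid square trigger into the bottom-right corner.
    images[idx, -patch_size:, -patch_size:, :] = patch_value
    # Relabel the poisoned samples to the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx


if __name__ == "__main__":
    # Toy example: 100 random "images" of size 32x32x3 with 10 classes.
    X = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp, poisoned_idx = poison_dataset(X, y)
    print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples "
          f"with target label {yp[poisoned_idx][0]}.")

Training any standard classifier on (Xp, yp) would then embed the backdoor; the stealthy attacks studied in the dissertation refine this basic recipe (e.g., with less visible triggers or clean labels).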

Cite

Text

Li. "Poisoning-Based Backdoor Attacks in Computer Vision." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26921

Markdown

[Li. "Poisoning-Based Backdoor Attacks in Computer Vision." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/li2023aaai-poisoning/) doi:10.1609/AAAI.V37I13.26921

BibTeX

@inproceedings{li2023aaai-poisoning,
  title     = {{Poisoning-Based Backdoor Attacks in Computer Vision}},
  author    = {Li, Yiming},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16121--16122},
  doi       = {10.1609/AAAI.V37I13.26921},
  url       = {https://mlanthology.org/aaai/2023/li2023aaai-poisoning/}
}