Few-Shot Backdoor Attacks on Visual Object Tracking
Abstract
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems. In current practice, third-party resources such as datasets, backbone networks, and training platforms are frequently used to train high-performance VOT models. While these resources bring convenience, they also introduce new security threats into VOT models. In this paper, we reveal such a threat where an adversary can easily implant hidden backdoors into VOT models by tampering with the training process. Specifically, we propose a simple yet effective few-shot backdoor attack (FSBA) that optimizes two losses alternately: 1) a \emph{feature loss} defined in the hidden feature space, and 2) the standard \emph{tracking loss}. We show that, once the backdoor is embedded into the target model by our FSBA, it can trick the model into losing track of specific objects even when the \emph{trigger} appears in only one or a few frames. We examine our attack in both digital and physical-world settings and show that it can significantly degrade the performance of state-of-the-art VOT trackers. We also show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to backdoor attacks.
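The alternating optimization described above can be sketched in PyTorch. This is a minimal illustration under assumed interfaces, not the authors' released implementation: the `stamp_trigger` helper, the `tracker.tracking_loss` method, the `backbone` call, and the particular squared-distance form of the feature loss are all hypothetical stand-ins for whatever the paper actually uses.

```python
import torch
import torch.nn.functional as F


def stamp_trigger(frames: torch.Tensor, trigger: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Blend a small trigger patch into the top-left corner of each frame.

    Hypothetical helper: the real attack may place/blend the trigger differently.
    frames: (B, C, H, W), trigger: (C, h, w).
    """
    patched = frames.clone()
    h, w = trigger.shape[-2:]
    patched[..., :h, :w] = (1 - alpha) * patched[..., :h, :w] + alpha * trigger
    return patched


def feature_loss(backbone, benign: torch.Tensor, triggered: torch.Tensor) -> torch.Tensor:
    """Negated squared distance between benign and triggered hidden features.

    Minimizing this loss *maximizes* the feature-space gap, so triggered
    frames no longer look like the tracked object to the backbone.
    """
    f_benign = F.normalize(backbone(benign).flatten(1), dim=1)
    f_trigger = F.normalize(backbone(triggered).flatten(1), dim=1)
    return -(f_benign - f_trigger).pow(2).sum(dim=1).mean()


def fsba_step(tracker, backbone, batch, trigger, optimizer, step: int) -> float:
    """One alternating FSBA update.

    Even steps optimize the standard tracking loss on benign pairs;
    odd steps optimize the feature loss on triggered search frames.
    `optimizer` is assumed to cover both tracker and backbone parameters.
    """
    template, search, labels = batch
    optimizer.zero_grad()
    if step % 2 == 0:
        # Hypothetical interface for the tracker's usual supervised loss.
        loss = tracker.tracking_loss(template, search, labels)
    else:
        loss = feature_loss(backbone, search, stamp_trigger(search, trigger))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intuition follows the abstract: the tracking-loss steps preserve benign performance, while the feature-loss steps push triggered frames away from benign ones in the hidden feature space, so at inference time a trigger stamped on even one or a few frames can derail the tracker.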
Cite
Text
Li et al. "Few-Shot Backdoor Attacks on Visual Object Tracking." International Conference on Learning Representations, 2022.
Markdown
[Li et al. "Few-Shot Backdoor Attacks on Visual Object Tracking." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/li2022iclr-fewshot/)
BibTeX
@inproceedings{li2022iclr-fewshot,
  title     = {{Few-Shot Backdoor Attacks on Visual Object Tracking}},
  author    = {Li, Yiming and Zhong, Haoxiang and Ma, Xingjun and Jiang, Yong and Xia, Shu-Tao},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/li2022iclr-fewshot/}
}