BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)

Abstract

Image segmentation is foundational to computer vision applications, and the Segment Anything Model (SAM) has become a leading base model for these tasks. However, SAM struggles with specialized downstream tasks, which has prompted a variety of customized SAM models. We introduce BadSAM, a backdoor attack tailored for SAM, revealing that such customized models can harbor malicious behaviors. Using the CAMO dataset, we confirm BadSAM's efficacy and identify vulnerabilities in SAM. This study paves the way for the development of more secure and customizable vision foundation models.

Cite

Text

Guan et al. "BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30448

Markdown

[Guan et al. "BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/guan2024aaai-badsam/) doi:10.1609/AAAI.V38I21.30448

BibTeX

@inproceedings{guan2024aaai-badsam,
  title     = {{BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)}},
  author    = {Guan, Zihan and Hu, Mengxuan and Zhou, Zhongliang and Zhang, Jielu and Li, Sheng and Liu, Ninghao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23506--23507},
  doi       = {10.1609/AAAI.V38I21.30448},
  url       = {https://mlanthology.org/aaai/2024/guan2024aaai-badsam/}
}