Unsegment Anything by Simulating Deformation

Abstract

Foundation segmentation models, while powerful, pose a significant risk: they enable users to effortlessly extract any object from any digital content with a single click, potentially leading to copyright infringement or malicious misuse. To mitigate this risk, we introduce a new task, "Anything Unsegmentable", to grant any image "the right to be unsegmented". The ambitious pursuit of this task is a highly transferable adversarial attack against all prompt-based segmentation models, regardless of model parameterization and prompts. We highlight the non-transferable and heterogeneous nature of prompt-specific adversarial noise. Our approach instead disrupts image encoder features to achieve prompt-agnostic attacks. Intriguingly, targeted feature attacks exhibit better transferability than untargeted ones, suggesting that the optimal update direction aligns with the image manifold. Based on these observations, we design a novel attack named Unsegment Anything by Simulating Deformation (UAD). Our attack optimizes a differentiable deformation function to create a target deformed image, which alters structural information while keeping the feature distance reachable by the adversarial example. Extensive experiments verify the effectiveness of our approach, compromising a variety of promptable segmentation models with different architectures and prompt interfaces.
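The abstract describes a two-stage procedure: first simulate a structural deformation of the image, then run a targeted feature attack that pulls the adversarial example's encoder features toward those of the deformed target. The sketch below illustrates this idea under stated assumptions; `encoder` (e.g., a SAM-style ViT image encoder), the flow-field parameterization, loss weights, and step sizes are illustrative choices, not the authors' released implementation.

```python
# Hedged sketch of "simulate deformation, then targeted feature attack".
# All hyperparameters and the `encoder` interface are assumptions.
import torch
import torch.nn.functional as F

def simulate_deformation(image, encoder, steps=100, lr=0.01, reg=1.0):
    """Stage 1 (assumed form): optimize a differentiable warp so the deformed
    image changes structure while its encoder features stay reachable."""
    B, C, H, W = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).repeat(B, 1, 1, 1)
    base_grid = base_grid.to(image.device)
    flow = torch.zeros_like(base_grid, requires_grad=True)  # learnable offset
    opt = torch.optim.Adam([flow], lr=lr)
    with torch.no_grad():
        clean_feat = encoder(image)
    for _ in range(steps):
        warped = F.grid_sample(image, base_grid + flow, align_corners=True)
        feat = encoder(warped)
        # Push features away from the clean image (structural change) while
        # penalizing large flows so the target remains reachable under an
        # epsilon-bounded perturbation.
        loss = -F.mse_loss(feat, clean_feat) + reg * flow.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.grid_sample(image, base_grid + flow, align_corners=True)

def targeted_feature_attack(image, target, encoder, eps=8/255, alpha=2/255,
                            steps=50):
    """Stage 2 (assumed form): PGD that minimizes the feature distance to the
    deformed target, making the attack prompt-agnostic."""
    with torch.no_grad():
        target_feat = encoder(target)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(encoder(image + delta), target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # targeted: descend
            delta.clamp_(-eps, eps)                     # L-inf budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # valid pixels
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Using a targeted objective here mirrors the abstract's observation that targeted feature attacks transfer better than untargeted ones, since the deformed target keeps the update direction close to the image manifold.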

Cite

Text

Lu et al. "Unsegment Anything by Simulating Deformation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02293

Markdown

[Lu et al. "Unsegment Anything by Simulating Deformation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/lu2024cvpr-unsegment/) doi:10.1109/CVPR52733.2024.02293

BibTeX

@inproceedings{lu2024cvpr-unsegment,
  title     = {{Unsegment Anything by Simulating Deformation}},
  author    = {Lu, Jiahao and Yang, Xingyi and Wang, Xinchao},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {24294--24304},
  doi       = {10.1109/CVPR52733.2024.02293},
  url       = {https://mlanthology.org/cvpr/2024/lu2024cvpr-unsegment/}
}