Understanding the Vulnerability of CLIP to Image Compression

Abstract

CLIP is a widely used foundational vision-language model, applied to zero-shot image recognition and other image-text alignment tasks. We demonstrate that CLIP is vulnerable to changes in image quality under compression. This surprising result is further analysed using an attribution method, Integrated Gradients. Using this attribution method, we can better understand, both quantitatively and qualitatively, how compression affects the zero-shot recognition accuracy of this model. We evaluate this extensively on CIFAR-10 and STL-10. Our work provides a basis for understanding this vulnerability of CLIP and can help us develop more effective methods to improve the robustness of CLIP and other vision-language models.
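For context, Integrated Gradients attributes a model's output to its input features by integrating the gradient along a straight-line path from a baseline input to the actual input. A minimal pure-Python sketch on a toy quadratic model (not the paper's CLIP setup; the function names and the toy model are illustrative assumptions) shows the mechanics, including the completeness property that attributions sum to the difference in model outputs:

```python
def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Approximate IG attributions via a midpoint Riemann sum.

    grad_fn: gradient of the model output w.r.t. the input (a list of floats).
    x, baseline: input and baseline points of equal length.
    """
    n = len(x)
    attr = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of the k-th sub-interval
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad_fn(point)
        for i in range(n):
            # (x_i - baseline_i) * average gradient along the path
            attr[i] += (x[i] - baseline[i]) * g[i] / steps
    return attr

# Toy "model" F(x) = sum(x_i^2), with analytic gradient 2*x (an assumption
# for illustration; the paper applies IG to CLIP's image encoder).
def f(x):
    return sum(v * v for v in x)

def grad_f(x):
    return [2.0 * v for v in x]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: sum(attr) ~= f(x) - f(baseline)
```

In the paper's setting, the same procedure is applied per pixel, which is what makes it possible to localise where compression artefacts change the model's decision.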

Cite

Text

Chen et al. "Understanding the Vulnerability of CLIP to Image Compression." NeurIPS 2023 Workshops: R0-FoMo, 2023.

Markdown

[Chen et al. "Understanding the Vulnerability of CLIP to Image Compression." NeurIPS 2023 Workshops: R0-FoMo, 2023.](https://mlanthology.org/neuripsw/2023/chen2023neuripsw-understanding-a/)

BibTeX

@inproceedings{chen2023neuripsw-understanding-a,
  title     = {{Understanding the Vulnerability of CLIP to Image Compression}},
  author    = {Chen, Cangxiong and Namboodiri, Vinay P. and Padget, Julian},
  booktitle = {NeurIPS 2023 Workshops: R0-FoMo},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/chen2023neuripsw-understanding-a/}
}