DiffCLIP: Differential Attention Meets CLIP

Abstract

We propose DiffCLIP, a novel vision-language model that extends the differential attention mechanism to CLIP architectures. Differential attention was originally developed for large language models to amplify relevant context while canceling out noisy information. In this work, we integrate this mechanism into CLIP's dual encoder (image and text) framework. With minimal additional parameters, DiffCLIP achieves superior performance on image-text understanding tasks. Across zero-shot classification, retrieval, and robustness benchmarks, DiffCLIP consistently outperforms baseline CLIP models. Notably, these gains come with negligible computational overhead, demonstrating that differential attention can significantly enhance multi-modal representations without sacrificing efficiency.
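For intuition, differential attention computes two independent softmax attention maps and takes their difference, so attention mass that both maps place on the same (irrelevant) positions cancels out. Below is a minimal, hypothetical single-head sketch in PyTorch; the module name, the plain learnable scalar lambda, and the omission of the lambda reparameterization and per-head normalization used in the original Differential Transformer are simplifications for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialAttentionHead(nn.Module):
    """Illustrative single-head differential attention (simplified sketch)."""

    def __init__(self, embed_dim: int, head_dim: int, lambda_init: float = 0.8):
        super().__init__()
        # Two query/key projections produce two attention maps;
        # one map is subtracted from the other.
        self.q_proj = nn.Linear(embed_dim, 2 * head_dim, bias=False)
        self.k_proj = nn.Linear(embed_dim, 2 * head_dim, bias=False)
        self.v_proj = nn.Linear(embed_dim, head_dim, bias=False)
        # Simplified: a single learnable scalar (the paper reparameterizes lambda).
        self.lam = nn.Parameter(torch.tensor(lambda_init))
        self.head_dim = head_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        q1, q2 = self.q_proj(x).chunk(2, dim=-1)
        k1, k2 = self.k_proj(x).chunk(2, dim=-1)
        v = self.v_proj(x)
        scale = self.head_dim ** -0.5
        a1 = F.softmax(q1 @ k1.transpose(-2, -1) * scale, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-2, -1) * scale, dim=-1)
        # Differential map: attention noise common to both maps cancels.
        return (a1 - self.lam * a2) @ v


if __name__ == "__main__":
    # Example shapes as in a CLIP-style text encoder (batch, tokens, width).
    head = DifferentialAttentionHead(embed_dim=512, head_dim=64)
    out = head(torch.randn(2, 77, 512))
    print(out.shape)  # torch.Size([2, 77, 64])
```

Subtracting the second softmax map acts like a differential amplifier: attention assigned by only one map survives, while common-mode noise is suppressed. DiffCLIP applies this mechanism in both the image and text encoders, replacing standard attention while adding only the extra projections and the lambda parameter.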

Cite

Text

Hammoud and Ghanem. "DiffCLIP: Differential Attention Meets CLIP." Transactions on Machine Learning Research, 2025.

Markdown

[Hammoud and Ghanem. "DiffCLIP: Differential Attention Meets CLIP." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/hammoud2025tmlr-diffclip/)

BibTeX

@article{hammoud2025tmlr-diffclip,
  title     = {{DiffCLIP: Differential Attention Meets CLIP}},
  author    = {Hammoud, Hasan Abed Al Kader and Ghanem, Bernard},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/hammoud2025tmlr-diffclip/}
}