Mitigate the Gap: Improving Cross-Modal Alignment in CLIP

Abstract

Contrastive Language--Image Pre-training (CLIP) has demonstrated remarkable improvements in zero-shot classification and cross-modal vision-language tasks. Yet, from a geometrical point of view, the CLIP embedding space has been found to exhibit a pronounced modality gap. This gap renders the embedding space overly sparse and disconnected, with the different modalities densely distributed in distinct subregions of the hypersphere. In this work, we propose AlignCLIP to improve the alignment between text and image embeddings and thereby reduce the modality gap. By sharing the learnable parameters between the modality encoders and applying a semantically regularized separation objective to the uni-modal embeddings, AlignCLIP increases cross-modal alignment and yields gains across several zero-shot and fine-tuning downstream evaluations. The source code and model checkpoints for reproducing our experiments are available at https://github.com/sarahESL/AlignCLIP.
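
The abstract refers to two ingredients: the modality gap between image and text embeddings, and a training objective that adds a separation term to the standard contrastive loss. The sketch below is not the authors' implementation; it is a minimal PyTorch-style illustration, under assumed names, of how the modality gap is commonly measured (distance between the centroids of the normalized image and text embeddings) and how a CLIP-style symmetric contrastive loss might be combined with a generic intra-modal separation regularizer. The regularizer shown here is a plain cosine-similarity penalty and the weight sep_weight is hypothetical; the paper's semantically regularized objective is more involved.

# Illustrative sketch only; all names and the separation term are assumptions,
# not the AlignCLIP implementation.
import torch
import torch.nn.functional as F

def modality_gap(image_emb: torch.Tensor, text_emb: torch.Tensor) -> float:
    # Euclidean distance between the centroids of the two modalities
    # after L2 normalization onto the unit hypersphere.
    img_center = F.normalize(image_emb, dim=-1).mean(dim=0)
    txt_center = F.normalize(text_emb, dim=-1).mean(dim=0)
    return (img_center - txt_center).norm().item()

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE loss over matched image-text pairs in a batch.
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def separation_regularizer(emb):
    # Generic uni-modal separation term (an assumption, not the paper's
    # objective): penalize cosine similarity between distinct samples
    # of the same modality so embeddings spread out within the modality.
    z = F.normalize(emb, dim=-1)
    sim = z @ z.t()
    off_diag = sim - torch.diag(torch.diag(sim))
    return off_diag.mean()

if __name__ == "__main__":
    # Random embeddings stand in for the outputs of (shared-parameter) encoders.
    img_emb = torch.randn(32, 512)
    txt_emb = torch.randn(32, 512)
    sep_weight = 0.1  # hypothetical weighting of the separation term
    gap = modality_gap(img_emb, txt_emb)
    loss = (clip_contrastive_loss(img_emb, txt_emb)
            + sep_weight * (separation_regularizer(img_emb)
                            + separation_regularizer(txt_emb)))
    print(f"modality gap: {gap:.3f}, loss: {loss.item():.3f}")

In this reading, parameter sharing between the modality encoders would mean that the image and text towers reuse (some of) the same weights, so both modalities are mapped by a partly common function; the separation term then counteracts the tendency of each modality to collapse into its own dense cluster on the hypersphere.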

Cite

Text

Eslami and de Melo. "Mitigate the Gap: Improving Cross-Modal Alignment in CLIP." International Conference on Learning Representations, 2025.

Markdown

[Eslami and de Melo. "Mitigate the Gap: Improving Cross-Modal Alignment in CLIP." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/eslami2025iclr-mitigate/)

BibTeX

@inproceedings{eslami2025iclr-mitigate,
  title     = {{Mitigate the Gap: Improving Cross-Modal Alignment in CLIP}},
  author    = {Eslami, Sedigheh and de Melo, Gerard},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/eslami2025iclr-mitigate/}
}