Mind the Gap: Understanding the Modality Gap in Multi-Modal Contrastive Representation Learning
Abstract
We present the modality gap, an intriguing geometric phenomenon in the representation space of multi-modal models. Specifically, we show that different data modalities (e.g. images and text) are embedded at arm's length in the shared representation space of multi-modal models such as CLIP. Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization. At initialization, we show empirically and theoretically that the representations produced by a typical deep neural network are restricted to a narrow cone. As a consequence, in a multi-modal model with two encoders, the representations of the two modalities are clearly separated when the model is initialized. During optimization, contrastive learning keeps the two modalities a certain distance apart, and this distance is influenced by the temperature parameter in the loss function. Our experiments further demonstrate that varying the modality gap distance can significantly improve the model's downstream zero-shot classification performance and fairness. Our code and data are available at https://modalitygap.readthedocs.io/
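To make the geometry concrete, here is a minimal NumPy sketch of the two quantities the abstract refers to: the gap vector between the centers of the two embedding sets, and a shift of both modalities along that vector to vary the gap distance. This is our own illustration, not the paper's released code; the function names, the scale parameter `lam`, and the random stand-in embeddings are assumptions for demonstration.

```python
import numpy as np

def modality_gap(image_emb: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Gap vector between the centers of two (n, d) sets of
    L2-normalized embeddings, e.g. CLIP image and text features."""
    return image_emb.mean(axis=0) - text_emb.mean(axis=0)

def shift_gap(image_emb: np.ndarray, text_emb: np.ndarray, lam: float):
    """Move the two modalities apart (lam > 0) or together (lam < 0)
    along the gap vector, then re-normalize to the unit hypersphere."""
    delta = modality_gap(image_emb, text_emb)
    img = image_emb + lam * delta
    txt = text_emb - lam * delta
    img /= np.linalg.norm(img, axis=1, keepdims=True)
    txt /= np.linalg.norm(txt, axis=1, keepdims=True)
    return img, txt

# Random unit vectors stand in for encoder outputs on paired data.
rng = np.random.default_rng(0)
img = rng.normal(size=(512, 64))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(512, 64))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

print(np.linalg.norm(modality_gap(img, txt)))  # Euclidean size of the gap
img2, txt2 = shift_gap(img, txt, lam=0.5)      # widen the gap
print(np.linalg.norm(modality_gap(img2, txt2)))
```

Measuring downstream zero-shot accuracy and fairness metrics before and after such a shift is how one would probe the effect the abstract describes.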
Cite
Text
Liang et al. "Mind the Gap: Understanding the Modality Gap in Multi-Modal Contrastive Representation Learning." ICML 2022 Workshops: Pre-Training, 2022.
Markdown
[Liang et al. "Mind the Gap: Understanding the Modality Gap in Multi-Modal Contrastive Representation Learning." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/liang2022icmlw-mind/)
BibTeX
@inproceedings{liang2022icmlw-mind,
title = {{Mind the Gap: Understanding the Modality Gap in Multi-Modal Contrastive Representation Learning}},
author = {Liang, Weixin and Zhang, Yuhui and Kwon, Yongchan and Yeung, Serena and Zou, James},
booktitle = {ICML 2022 Workshops: Pre-Training},
year = {2022},
url = {https://mlanthology.org/icmlw/2022/liang2022icmlw-mind/}
}