MASS: Overcoming Language Bias in Image-Text Matching

Abstract

Pretrained visual-language models have made significant advancements in multimodal tasks, including image-text retrieval. However, a major challenge in image-text matching lies in language bias, where models predominantly rely on language priors and neglect to adequately consider the visual content. We thus present Multimodal ASsociation Score (MASS), a framework that reduces the reliance on language priors for better visual accuracy in image-text matching problems. It can be seamlessly incorporated into existing visual-language models without necessitating additional training. Our experiments have shown that MASS effectively lessens language bias without losing an understanding of linguistic compositionality. Overall, MASS offers a promising solution for enhancing image-text matching performance in visual-language models.

Cite

Text

Chung et al. "MASS: Overcoming Language Bias in Image-Text Matching." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i3.32262

Markdown

[Chung et al. "MASS: Overcoming Language Bias in Image-Text Matching." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/chung2025aaai-mass/) doi:10.1609/aaai.v39i3.32262

BibTeX

@inproceedings{chung2025aaai-mass,
  title     = {{MASS: Overcoming Language Bias in Image-Text Matching}},
  author    = {Chung, Jiwan and Lim, Seungwon and Lee, Sangkyu and Yu, Youngjae},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {2591--2599},
  doi       = {10.1609/aaai.v39i3.32262},
  url       = {https://mlanthology.org/aaai/2025/chung2025aaai-mass/}
}