Cross-Modal Coherence for Text-to-Image Retrieval
Abstract
Common image-text joint understanding techniques presume that images and their associated text can universally be characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and explicitly modeling these relations could improve the performance of current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for the text-to-image retrieval task. Our analysis shows that models trained with image-text coherence relations retrieve the images originally paired with target text more often than coherence-agnostic models do. We also show via human evaluation that images retrieved by the proposed coherence-aware model are preferred over those retrieved by a coherence-agnostic baseline by a large margin. Our findings provide insights into the ways that different modalities communicate and the role of coherence relations in capturing commonsense inferences in text and imagery.
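To make the idea concrete, below is a minimal sketch (not the authors' released code) of a coherence-aware retrieval objective: a shared image-text similarity score is trained jointly with a coherence-relation classifier, so the embedding space reflects how an image relates to its text. All module names, feature dimensions, and the relation inventory are illustrative assumptions.

# Minimal sketch of coherence-aware text-to-image retrieval training.
# Hypothetical names and sizes; real encoders (e.g., CNN/BERT features)
# would replace the random inputs used in the toy usage at the bottom.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_RELATIONS = 5  # illustrative coherence-relation inventory

class CoherenceAwareRetriever(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, emb_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)   # project image features
        self.txt_proj = nn.Linear(txt_dim, emb_dim)   # project text features
        # Classifier over coherence relations for a matched image-text pair.
        self.rel_head = nn.Linear(2 * emb_dim, NUM_RELATIONS)

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        sim = img @ txt.t()  # pairwise similarity matrix for retrieval
        rel_logits = self.rel_head(torch.cat([img, txt], dim=-1))
        return sim, rel_logits

def joint_loss(sim, rel_logits, rel_labels, alpha=0.5):
    # Contrastive retrieval loss: matched pairs lie on the diagonal.
    targets = torch.arange(sim.size(0))
    retrieval = F.cross_entropy(sim, targets)
    # Coherence supervision: predict the relation of each matched pair.
    coherence = F.cross_entropy(rel_logits, rel_labels)
    return retrieval + alpha * coherence

# Toy usage with random features standing in for real encoder outputs.
model = CoherenceAwareRetriever()
sim, rel_logits = model(torch.randn(8, 2048), torch.randn(8, 768))
loss = joint_loss(sim, rel_logits, torch.randint(0, NUM_RELATIONS, (8,)))
loss.backward()

A coherence-agnostic baseline in this sketch would simply drop the coherence term (alpha = 0); the paper's comparison is between models trained with and without such relation supervision.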
Cite
Text
Alikhani et al. "Cross-Modal Coherence for Text-to-Image Retrieval." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I10.21285
Markdown
[Alikhani et al. "Cross-Modal Coherence for Text-to-Image Retrieval." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/alikhani2022aaai-cross/) doi:10.1609/AAAI.V36I10.21285
BibTeX
@inproceedings{alikhani2022aaai-cross,
title = {{Cross-Modal Coherence for Text-to-Image Retrieval}},
author = {Alikhani, Malihe and Han, Fangda and Ravi, Hareesh and Kapadia, Mubbasir and Pavlovic, Vladimir and Stone, Matthew},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {10427--10435},
doi = {10.1609/AAAI.V36I10.21285},
url = {https://mlanthology.org/aaai/2022/alikhani2022aaai-cross/}
}