Fine-Grained Semantically Aligned Vision-Language Pre-Training
Abstract
Large-scale vision-language pre-training has shown impressive advances in a wide range of downstream tasks. Existing methods mainly model cross-modal alignment via the similarity of global image and text representations, or via cross-modal attention over image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently estimate the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on a variety of vision-language tasks. Without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a promising new direction of learning fine-grained semantics from large-scale raw image-text pairs.
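For readers unfamiliar with game-theoretic interactions, the quantity underlying this line of work is the Shapley interaction index between two players (e.g., a visual region and a textual phrase) in a cooperative game. The sketch below is a generic brute-force implementation of the standard Shapley interaction index (Grabisch and Roubens), not the paper's method; the value function `v` and the player set are hypothetical placeholders. Its exponential cost in the number of players is exactly why the paper proposes a neural estimation module instead.

```python
from itertools import combinations
from math import factorial

def shapley_interaction(v, players, i, j):
    """Brute-force Shapley interaction index I(i, j).

    v: a cooperative game, mapping a frozenset of players to a real payoff.
    players: the full player set; i, j: the pair whose interaction we measure.
    Complexity is O(2^n), which motivates learned approximations.
    """
    others = [p for p in players if p not in (i, j)]
    n = len(players)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            T = frozenset(subset)
            # Shapley weight for a coalition of size |T| when two players join.
            w = factorial(len(T)) * factorial(n - len(T) - 2) / factorial(n - 1)
            # Marginal synergy of adding i and j together vs. separately.
            delta = v(T | {i, j}) - v(T | {i}) - v(T | {j}) + v(T)
            total += w * delta
    return total
```

For an additive game the index is zero (no synergy); for a game whose payoff requires both players jointly, it is positive.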
Cite
Text
Li et al. "Fine-Grained Semantically Aligned Vision-Language Pre-Training." Neural Information Processing Systems, 2022.
Markdown
[Li et al. "Fine-Grained Semantically Aligned Vision-Language Pre-Training." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-finegrained/)
BibTeX
@inproceedings{li2022neurips-finegrained,
title = {{Fine-Grained Semantically Aligned Vision-Language Pre-Training}},
author = {Li, Juncheng and He, Xin and Wei, Longhui and Qian, Long and Zhu, Linchao and Xie, Lingxi and Zhuang, Yueting and Tian, Qi and Tang, Siliang},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/li2022neurips-finegrained/}
}