Multi-View Visual Semantic Embedding
Abstract
Visual Semantic Embedding (VSE) is a dominant method for cross-modal vision-language retrieval. Its purpose is to learn an embedding space in which visual data are embedded close to their corresponding text descriptions. However, vision-language data exhibit large intra-class variations. For example, multiple texts describing the same image may be written from different views, and descriptions from different views are often dissimilar. Mainstream VSE methods embed samples of the same class at similar positions, which suppresses intra-class variations and leads to inferior generalization performance. This paper proposes a Multi-View Visual Semantic Embedding (MV-VSE) framework, which learns multiple embeddings for one piece of visual data and explicitly models intra-class variations. To optimize MV-VSE, a multi-view upper bound loss is proposed, and the multi-view embeddings are jointly optimized while retaining intra-class variations. MV-VSE is plug-and-play and can be applied to various VSE models and loss functions without excessively increasing model complexity. Experimental results on the Flickr30K and MS-COCO datasets demonstrate the superior performance of our framework.
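To make the idea in the abstract concrete, the sketch below (PyTorch) produces several view-specific embeddings for one image and combines per-view hinge losses with a temperature-scaled LogSumExp, which upper-bounds the largest per-view loss so that all views are optimized jointly. This is a minimal illustration, not the paper's actual architecture or loss: the parallel linear heads, the LogSumExp aggregation, and all hyperparameters are assumptions made for the example.

```python
# Hypothetical sketch of multi-view image embeddings with an upper-bound-style
# multi-view loss; MV-VSE's exact head design and loss are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiViewImageEncoder(nn.Module):
    """Maps a pooled visual feature to K view-specific embeddings.

    Assumption: K parallel linear heads over a shared backbone feature.
    """

    def __init__(self, feat_dim=2048, embed_dim=1024, num_views=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, embed_dim) for _ in range(num_views)]
        )

    def forward(self, img_feat):                      # img_feat: (B, feat_dim)
        views = [F.normalize(h(img_feat), dim=-1) for h in self.heads]
        return torch.stack(views, dim=1)              # (B, K, embed_dim)


def multi_view_upper_bound_loss(img_views, txt_emb, margin=0.2, tau=0.1):
    """Per-view hinge (triplet) losses aggregated with a smooth upper bound.

    Assumption: a temperature-scaled LogSumExp over per-view hinge losses,
    which upper-bounds the worst-view loss, stands in for the paper's
    multi-view upper bound loss.
    """
    B, K, _ = img_views.shape
    txt_emb = F.normalize(txt_emb, dim=-1)                        # (B, D)
    # cosine similarity between every image view and every caption in the batch
    sims = torch.einsum('bkd,nd->bkn', img_views, txt_emb)        # (B, K, B)
    pos = torch.diagonal(sims, dim1=0, dim2=2).t()                # (B, K) matched pairs
    # hardest negative caption per image view (exclude the matched caption)
    neg_mask = torch.eye(B, dtype=torch.bool, device=sims.device).unsqueeze(1)
    neg = sims.masked_fill(neg_mask, float('-inf')).max(dim=2).values   # (B, K)
    per_view = F.relu(margin + neg - pos)                         # (B, K) hinge per view
    # LogSumExp >= max, so this bounds the largest per-view loss from above
    # while keeping gradients flowing to every view
    return (tau * torch.logsumexp(per_view / tau, dim=1)).mean()


if __name__ == "__main__":
    enc = MultiViewImageEncoder()
    img_feat = torch.randn(8, 2048)        # dummy pooled image features
    txt_emb = torch.randn(8, 1024)         # dummy caption embeddings
    print(multi_view_upper_bound_loss(enc(img_feat), txt_emb).item())
```

Because only the embedding head and the loss aggregation change, a sketch like this can in principle be attached to different VSE backbones and base losses, which mirrors the plug-and-play claim in the abstract.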
Cite
Text
Li et al. "Multi-View Visual Semantic Embedding." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/158
Markdown
[Li et al. "Multi-View Visual Semantic Embedding." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/li2022ijcai-multi/) doi:10.24963/IJCAI.2022/158
BibTeX
@inproceedings{li2022ijcai-multi,
title = {{Multi-View Visual Semantic Embedding}},
author = {Li, Zheng and Guo, Caili and Feng, Zerun and Hwang, Jenq-Neng and Xue, Xijun},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {1130-1136},
doi = {10.24963/IJCAI.2022/158},
url = {https://mlanthology.org/ijcai/2022/li2022ijcai-multi/}
}