Learning Deep Structure-Preserving Image-Text Embeddings

Abstract

This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by the metric learning literature. Extensive experiments show that our approach yields significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.
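
To make the architecture and objective concrete, below is a minimal PyTorch sketch of a two-branch embedding network and the two loss terms the abstract describes: a bidirectional cross-view ranking loss and a within-view neighborhood-preservation loss. The layer sizes, margin value, hard-example mining strategy, and the group_ids convention (sentences sharing an id describe the same image) are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One branch of the two-branch network: stacked linear projections
    with nonlinearities, followed by L2 normalization (an assumption here)."""
    def __init__(self, in_dim, hidden_dim=2048, embed_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm embeddings

def cross_view_ranking_loss(img, txt, margin=0.1):
    """Large-margin ranking in both directions: each matched image-sentence
    pair (the diagonal of the similarity matrix) should outscore all
    mismatched pairs by the margin."""
    sim = img @ txt.t()                          # cosine similarities (unit-norm inputs)
    pos = sim.diag().unsqueeze(1)                # matched-pair similarities
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    i2t = (margin + sim - pos).clamp(min=0)      # image-to-text violations
    t2i = (margin + sim - pos.t()).clamp(min=0)  # text-to-image violations
    return i2t[off_diag].mean() + t2i[off_diag].mean()

def within_view_structure_loss(txt, group_ids, margin=0.1):
    """Neighborhood-preservation sketch: sentences describing the same image
    (same group id) should lie closer to each other than to sentences from
    other groups, using the hardest positive and negative per anchor."""
    sim = txt @ txt.t()
    same = group_ids.unsqueeze(0) == group_ids.unsqueeze(1)
    eye = torch.eye(len(txt), dtype=torch.bool, device=txt.device)
    hardest_pos = sim.masked_fill(~(same & ~eye), float("inf")).min(dim=1).values
    hardest_neg = sim.masked_fill(same, float("-inf")).max(dim=1).values
    return (margin + hardest_neg - hardest_pos).clamp(min=0).mean()

A full training objective along the lines of the abstract would combine the cross-view ranking term with weighted within-view terms, the relative weights being hyperparameters.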

Cite

Text

Wang et al. "Learning Deep Structure-Preserving Image-Text Embeddings." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.541

Markdown

[Wang et al. "Learning Deep Structure-Preserving Image-Text Embeddings." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/wang2016cvpr-learning/) doi:10.1109/CVPR.2016.541

BibTeX

@inproceedings{wang2016cvpr-learning,
  title     = {{Learning Deep Structure-Preserving Image-Text Embeddings}},
  author    = {Wang, Liwei and Li, Yin and Lazebnik, Svetlana},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.541},
  url       = {https://mlanthology.org/cvpr/2016/wang2016cvpr-learning/}
}