Conditional Image-Text Embedding Networks

Abstract

This paper presents an approach for grounding phrases in images that jointly learns multiple text-conditioned embeddings in a single end-to-end model. To differentiate text phrases into semantically distinct subspaces, we propose a concept weight branch that automatically assigns phrases to embeddings, whereas prior work predefines such assignments. Our solution simplifies the representation requirements for individual embeddings and allows underrepresented concepts to take advantage of the shared representations before they are fed into concept-specific layers. Comprehensive experiments verify the effectiveness of our approach across three phrase grounding datasets, Flickr30K Entities, ReferIt Game, and Visual Genome, where we obtain improvements in grounding performance of 4%, 3%, and 4%, respectively, over a strong region-phrase embedding baseline.
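
As a reading aid, below is a minimal PyTorch-style sketch of the idea the abstract describes: a concept weight branch produces a softmax assignment of each phrase over K concept-specific scoring branches that operate on a shared region-phrase representation. All module names, dimensions, and the additive fusion are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEmbeddingHead(nn.Module):
    # Sketch: K concept-specific branches over a shared joint representation,
    # mixed by phrase-conditioned concept weights (all hyperparameters assumed).
    def __init__(self, region_dim=2048, phrase_dim=300, joint_dim=512, k=4):
        super().__init__()
        self.region_fc = nn.Linear(region_dim, joint_dim)  # shared region projection
        self.phrase_fc = nn.Linear(phrase_dim, joint_dim)  # shared phrase projection
        # K concept-specific layers, each scoring a region-phrase pair
        self.branches = nn.ModuleList(nn.Linear(joint_dim, 1) for _ in range(k))
        # concept weight branch: soft assignment of the phrase over the K branches
        self.assign = nn.Linear(phrase_dim, k)

    def forward(self, regions, phrase):
        # regions: (R, region_dim) candidate boxes; phrase: (phrase_dim,) text feature
        joint = F.relu(self.region_fc(regions) + self.phrase_fc(phrase))   # (R, joint_dim)
        per_branch = torch.cat([b(joint) for b in self.branches], dim=1)   # (R, K)
        weights = F.softmax(self.assign(phrase), dim=-1)                   # (K,)
        return per_branch @ weights                                        # (R,) grounding scores

# Usage: score 30 candidate regions against one phrase, pick the best box.
head = ConditionalEmbeddingHead()
scores = head(torch.randn(30, 2048), torch.randn(300))
best_region = scores.argmax()

The soft, learned assignment (rather than a hard, predefined one) is what lets underrepresented concepts still benefit from the shared projections while specializing within their own branch.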

Cite

Text

Plummer et al. "Conditional Image-Text Embedding Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01258-8_16

Markdown

[Plummer et al. "Conditional Image-Text Embedding Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/plummer2018eccv-conditional/) doi:10.1007/978-3-030-01258-8_16

BibTeX

@inproceedings{plummer2018eccv-conditional,
  title     = {{Conditional Image-Text Embedding Networks}},
  author    = {Plummer, Bryan A. and Kordas, Paige and Hadi Kiapour, M. and Zheng, Shuai and Piramuthu, Robinson and Lazebnik, Svetlana},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01258-8_16},
  url       = {https://mlanthology.org/eccv/2018/plummer2018eccv-conditional/}
}