Deep Correlation for Matching Images and Text

Abstract

This paper addresses the problem of matching images and captions in a joint latent space learnt with deep canonical correlation analysis (DCCA). The image and caption data are represented by the outputs of vision-based and text-based deep neural networks. The high dimensionality of these features poses a significant challenge in terms of memory and speed when they are used in the DCCA framework. We address these problems with a GPU implementation and propose methods to deal with overfitting, which makes it possible to evaluate the DCCA approach on popular caption-image matching benchmarks. We compare our approach to other recently proposed techniques and present state-of-the-art results on three datasets.
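To make the objective concrete, the sketch below computes the total correlation between two views, following the standard DCCA objective of Andrew et al. (2013) on which this line of work builds. It is a minimal NumPy version under stated assumptions, not the paper's GPU implementation; the function name dcca_correlation and the ridge term reg are illustrative choices, not taken from the paper.

import numpy as np

def dcca_correlation(H1, H2, reg=1e-4):
    """Total correlation between two views (standard DCCA objective).
    H1, H2: (n_samples, dim) network outputs for the image and text views.
    reg is a hypothetical ridge term added for numerical stability."""
    n = H1.shape[0]
    # Center each view.
    H1c = H1 - H1.mean(axis=0, keepdims=True)
    H2c = H2 - H2.mean(axis=0, keepdims=True)
    # Regularized covariance estimates.
    S11 = H1c.T @ H1c / (n - 1) + reg * np.eye(H1.shape[1])
    S22 = H2c.T @ H2c / (n - 1) + reg * np.eye(H2.shape[1])
    S12 = H1c.T @ H2c / (n - 1)
    # Whitened cross-covariance: T = S11^{-1/2} S12 S22^{-1/2}.
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    # Total correlation = sum of singular values (trace norm) of T.
    return np.linalg.svd(T, compute_uv=False).sum()

# Example with random features standing in for 8 image-caption pairs:
rng = np.random.default_rng(0)
print(dcca_correlation(rng.normal(size=(8, 4)), rng.normal(size=(8, 4))))

In a matching setting, the two networks are trained to maximize this quantity on paired data; images and captions can then be ranked for retrieval by proximity of their projections in the shared latent space.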

Cite

Text

Yan and Mikolajczyk. "Deep Correlation for Matching Images and Text." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298966

Markdown

[Yan and Mikolajczyk. "Deep Correlation for Matching Images and Text." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/yan2015cvpr-deep/) doi:10.1109/CVPR.2015.7298966

BibTeX

@inproceedings{yan2015cvpr-deep,
  title     = {{Deep Correlation for Matching Images and Text}},
  author    = {Yan, Fei and Mikolajczyk, Krystian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  doi       = {10.1109/CVPR.2015.7298966},
  url       = {https://mlanthology.org/cvpr/2015/yan2015cvpr-deep/}
}