Multi-View Multi-Label Canonical Correlation Analysis for Cross-Modal Matching and Retrieval

Abstract

In this paper, we address the problem of cross-modal retrieval in the presence of multi-view and multi-label data. For this, we present Multi-view Multi-label Canonical Correlation Analysis (or MVMLCCA), which is a generalization of CCA for multi-view data that also makes use of high-level semantic information available in the form of multi-label annotations in each view. While CCA relies on explicit pairings/associations of samples between two views (or modalities), MVMLCCA uses the available multi-label annotations to establish correspondence across multiple (two or more) views without the need for explicit pairing of multi-view samples. Extensive experiments on two multi-modal datasets demonstrate that the proposed approach offers much more flexibility than related approaches without compromising scalability or cross-modal retrieval performance. Our code and precomputed features are available at https://github.com/Rushil231100/MVMLCCA.
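To make the contrast in the abstract concrete, the sketch below shows classical two-view CCA, whose reliance on explicitly paired samples (row i of one view must correspond to row i of the other) is exactly the restriction MVMLCCA relaxes. This is an illustrative numpy implementation, not code from the paper's repository; the toy data, regularizer, and variable names are assumptions for the example.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Classical two-view CCA via whitening + SVD.

    X (n x dx) and Y (n x dy) must be row-wise PAIRED samples --
    the requirement that MVMLCCA removes by using shared labels instead.
    """
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance estimates
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Canonical directions are the singular vectors of the
    # whitened cross-covariance matrix
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(T)
    A = inv_sqrt(Sxx) @ U[:, :k]   # projection for view 1
    B = inv_sqrt(Syy) @ Vt[:k].T   # projection for view 2
    return A, B, s[:k]             # s[:k] are the canonical correlations

# Toy paired data: both views observe the same 1-D latent signal plus noise
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, rho = cca(X, Y, k=1)  # rho[0] is close to 1 for strongly shared signal
```

After fitting, cross-modal retrieval ranks items by similarity between the projected views `X @ A` and `Y @ B`; MVMLCCA extends this setup to more than two views whose samples are linked only through their multi-label annotations.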

Cite

Text

Sanghavi and Verma. "Multi-View Multi-Label Canonical Correlation Analysis for Cross-Modal Matching and Retrieval." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00516

Markdown

[Sanghavi and Verma. "Multi-View Multi-Label Canonical Correlation Analysis for Cross-Modal Matching and Retrieval." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/sanghavi2022cvprw-multiview/) doi:10.1109/CVPRW56347.2022.00516

BibTeX

@inproceedings{sanghavi2022cvprw-multiview,
  title     = {{Multi-View Multi-Label Canonical Correlation Analysis for Cross-Modal Matching and Retrieval}},
  author    = {Sanghavi, Rushil Kaushal and Verma, Yashaswi},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {4700--4709},
  doi       = {10.1109/CVPRW56347.2022.00516},
  url       = {https://mlanthology.org/cvprw/2022/sanghavi2022cvprw-multiview/}
}