Disjoint Mapping Network for Cross-Modal Matching of Voices and Faces

Abstract

We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Unlike existing methods, DIMNet does not explicitly learn a joint relationship between the modalities. Instead, it learns a shared representation for the different modalities by mapping each of them individually to their common covariates. These shared representations can then be used to find correspondences between the modalities. We show empirically that DIMNet outperforms the current state-of-the-art methods, with the additional benefits of being conceptually simpler and less data-intensive.
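The central idea, mapping each modality independently to shared covariate labels rather than learning a joint voice-face model, can be illustrated with a short sketch. The code below is a minimal illustration assuming PyTorch; the layer sizes, the use of identity as the covariate, and all names (DIMNetSketch, voice_encoder, face_encoder) are assumptions for exposition, not the paper's actual architecture.

import torch
import torch.nn as nn

class DIMNetSketch(nn.Module):
    """Two modality-specific encoders feed one shared covariate classifier."""
    def __init__(self, voice_dim=64, face_dim=128, embed_dim=32, num_classes=10):
        super().__init__()
        # Each encoder maps its own modality into the shared embedding space.
        self.voice_encoder = nn.Sequential(
            nn.Linear(voice_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.face_encoder = nn.Sequential(
            nn.Linear(face_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        # A single classifier, shared by both modalities, predicts the
        # covariate (here: identity). Sharing it is what aligns the two
        # embedding spaces without any paired voice-face supervision.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def embed(self, x, modality):
        encoder = self.voice_encoder if modality == "voice" else self.face_encoder
        return encoder(x)

    def forward(self, x, modality):
        return self.classifier(self.embed(x, modality))

# Training step: voice and face batches carry labels from the same covariate
# vocabulary but never need to be paired with each other.
model = DIMNetSketch()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

voices, voice_labels = torch.randn(8, 64), torch.randint(0, 10, (8,))
faces, face_labels = torch.randn(8, 128), torch.randint(0, 10, (8,))

loss = (criterion(model(voices, "voice"), voice_labels)
        + criterion(model(faces, "face"), face_labels))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At test time the classifier is discarded; cross-modal matching compares
# embeddings directly, e.g. cosine similarity between embed(v, "voice")
# and embed(f, "face").

Because each modality is supervised only by covariate labels, no voice-face pairs are needed during training, which is where the reduced data requirements claimed in the abstract come from.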

Cite

Text

Wen et al. "Disjoint Mapping Network for Cross-Modal Matching of Voices and Faces." International Conference on Learning Representations, 2019.

Markdown

[Wen et al. "Disjoint Mapping Network for Cross-Modal Matching of Voices and Faces." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/wen2019iclr-disjoint/)

BibTeX

@inproceedings{wen2019iclr-disjoint,
  title     = {{Disjoint Mapping Network for Cross-Modal Matching of Voices and Faces}},
  author    = {Wen, Yandong and Al Ismail, Mahmoud and Liu, Weiyang and Raj, Bhiksha and Singh, Rita},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/wen2019iclr-disjoint/}
}