Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval

Abstract

With the benefits of low storage cost and fast query speed, cross-modal hashing has recently received considerable attention. However, most existing cross-modal hashing methods cannot obtain powerful hash codes because they rely directly on hand-crafted features or ignore the heterogeneous correlations across different modalities, which greatly degrades retrieval performance. In this paper, we propose a novel deep cross-modal hashing method that generates compact hash codes through an end-to-end deep learning architecture, which can effectively capture the intrinsic relationships between modalities. Our architecture integrates different types of pairwise constraints to encourage the similarity of hash codes from an intra-modal view and an inter-modal view, respectively. Moreover, additional decorrelation constraints are introduced into the architecture to enhance the discriminative ability of each hash bit. Extensive experiments show that the proposed method yields state-of-the-art results on two cross-modal retrieval datasets.
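
To make the loss design described in the abstract concrete, below is a minimal sketch (not taken from the paper or its code) of how intra-modal and inter-modal pairwise constraints plus a bit-decorrelation penalty could be written in PyTorch. The likelihood-style pairwise term, the covariance-based decorrelation term, and all names (pairwise_loss, decorrelation_loss, image_net, text_net, lam) are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def pairwise_loss(u, v, sim):
    # Likelihood-style pairwise term on continuous codes u, v of shape (n, k).
    # sim[i, j] = 1 if sample i and sample j share a label, else 0.
    # Applied intra-modally (u and v from the same network) and
    # inter-modally (u from the image network, v from the text network).
    inner = 0.5 * u @ v.t()                      # theta_ij = 0.5 * <u_i, v_j>
    # Negative log-likelihood of sigmoid(theta); softplus is the stable form.
    return (F.softplus(inner) - sim * inner).mean()

def decorrelation_loss(u):
    # Penalize correlation between different hash bits of one modality.
    u = u - u.mean(dim=0, keepdim=True)          # center each bit
    cov = (u.t() @ u) / u.size(0)                # (k, k) bit covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()

# Hypothetical usage with the continuous outputs of two modality networks:
# img_u = image_net(images)   # (batch, k) pre-binarization codes
# txt_u = text_net(texts)     # (batch, k)
# loss = (pairwise_loss(img_u, txt_u, sim)           # inter-modal
#         + pairwise_loss(img_u, img_u, sim)         # intra-modal, image
#         + pairwise_loss(txt_u, txt_u, sim)         # intra-modal, text
#         + lam * (decorrelation_loss(img_u) + decorrelation_loss(txt_u)))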

Cite

Text

Yang et al. "Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10719

Markdown

[Yang et al. "Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/yang2017aaai-pairwise/) doi:10.1609/AAAI.V31I1.10719

BibTeX

@inproceedings{yang2017aaai-pairwise,
  title     = {{Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval}},
  author    = {Yang, Erkun and Deng, Cheng and Liu, Wei and Liu, Xianglong and Tao, Dacheng and Gao, Xinbo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {1618--1625},
  doi       = {10.1609/AAAI.V31I1.10719},
  url       = {https://mlanthology.org/aaai/2017/yang2017aaai-pairwise/}
}