Cross-Modality Binary Code Learning via Fusion Similarity Hashing
Abstract
Binary code learning has been an emerging topic in large-scale cross-modality retrieval. It aims to map features from multiple modalities into a common Hamming space, where cross-modality similarity can be approximated efficiently via Hamming distance. To this end, most existing works learn binary codes directly from the data instances in multiple modalities, preserving the intra- and inter-modal similarities separately. Few methods instead consider preserving the "fusion similarity" among multi-modal instances, which can explicitly capture their heterogeneous correlation in cross-modality retrieval. In this paper, we propose a hashing scheme, termed Fusion Similarity Hashing (FSH), which explicitly embeds the graph-based fusion similarity across modalities into a common Hamming space. Inspired by "fusion by diffusion", our core idea is to construct an undirected asymmetric graph to model the fusion similarity among different modalities, upon which a graph hashing scheme with alternating optimization is introduced to learn binary codes that embed this fusion similarity. Quantitative evaluations on three widely used benchmarks, i.e., UCI Handwritten Digit, MIR-Flickr25K and NUS-WIDE, demonstrate that the proposed FSH approach achieves superior performance over state-of-the-art methods.
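As a minimal sketch of the retrieval step the abstract describes (not the paper's training method): once both modalities are mapped into a common Hamming space, cross-modal retrieval reduces to ranking database codes by Hamming distance to a query code. The codes and sizes below are illustrative placeholders, not learned FSH codes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_db, n_bits = 6, 8

# Hypothetical learned binary codes for database items (e.g., images) ...
db_codes = rng.integers(0, 2, size=(n_db, n_bits))
# ... and for one query from another modality (e.g., text), in the same space.
query_code = rng.integers(0, 2, size=n_bits)

# Hamming distance = number of differing bits (XOR, then count).
hamming = np.count_nonzero(db_codes != query_code, axis=1)
ranking = np.argsort(hamming)  # nearest database items come first
```

In practice the bit comparison is done with packed integers and popcount for speed; the NumPy version above just makes the approximated-similarity idea concrete.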
Cite
Liu et al. "Cross-Modality Binary Code Learning via Fusion Similarity Hashing." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.672
@inproceedings{liu2017cvpr-crossmodality,
title = {{Cross-Modality Binary Code Learning via Fusion Similarity Hashing}},
author = {Liu, Hong and Ji, Rongrong and Wu, Yongjian and Huang, Feiyue and Zhang, Baochang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.672},
url = {https://mlanthology.org/cvpr/2017/liu2017cvpr-crossmodality/}
}