Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval
Abstract
Cross-modal hashing is usually regarded as an effective technique for large-scale textual-visual cross retrieval, where data from different modalities are mapped into a shared Hamming space for matching. Most traditional textual-visual binary encoding methods consider only holistic image representations and fail to model descriptive sentences. This leaves existing methods ill-suited to handling the rich semantics of informative cross-modal data in quality textual-visual search tasks. To address the problem of hashing cross-modal data with semantic-rich cues, this paper develops a novel integrated deep architecture, named Textual-Visual Deep Binaries (TVDB), that effectively encodes the detailed semantics of informative images and long descriptive sentences. In particular, region-based convolutional networks with long short-term memory units are introduced to fully explore image regional details, while semantic cues of sentences are modeled by a text convolutional network. Additionally, we propose a stochastic batch-wise training routine in which high-quality binary codes and deep encoding functions are efficiently optimized in an alternating manner. Experiments are conducted on three multimedia datasets, i.e. Microsoft COCO, IAPR TC-12, and INRIA Web Queries, where the proposed TVDB model significantly outperforms state-of-the-art binary coding methods in the task of cross-modal retrieval.
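To illustrate the retrieval setting the abstract describes, the sketch below maps toy image and text features into a shared Hamming space and ranks database items by Hamming distance. This is a conceptual illustration only: the sign-thresholded random projection stands in for TVDB's deep image and sentence encoders, and all names and dimensions here are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, projection):
    """Map real-valued features to {0,1} binary codes by sign thresholding.
    (Stand-in encoder; TVDB uses region-based CNN+LSTM and text CNN encoders.)"""
    return (features @ projection > 0).astype(np.uint8)

def hamming_rank(query_code, database_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable"), dists

# Toy setup: 8 "image" codes of 32 bits; both modalities share one
# projection so their codes live in the same Hamming space.
dim, bits = 16, 32
proj = rng.standard_normal((dim, bits))
image_feats = rng.standard_normal((8, dim))
image_codes = hash_codes(image_feats, proj)

# A "text" feature nearly identical to image 3 should retrieve it first.
text_feat = image_feats[3] + 1e-3 * rng.standard_normal(dim)
text_code = hash_codes(text_feat[None, :], proj)[0]

order, dists = hamming_rank(text_code, image_codes)
print("top match:", order[0], "distance:", dists[order[0]])
```

Because matching reduces to bit comparisons (XOR plus popcount in a real system), ranking millions of items is fast, which is the efficiency argument behind binary encoding for cross-modal search.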
Cite
Text

Shen et al. "Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.441

Markdown

[Shen et al. "Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/shen2017iccv-deep/) doi:10.1109/ICCV.2017.441

BibTeX
@inproceedings{shen2017iccv-deep,
title = {{Deep Binaries: Encoding Semantic-Rich Cues for Efficient Textual-Visual Cross Retrieval}},
author = {Shen, Yuming and Liu, Li and Shao, Ling and Song, Jingkuan},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.441},
url = {https://mlanthology.org/iccv/2017/shen2017iccv-deep/}
}