Asymmetric Deep Supervised Hashing

Abstract

Hashing has been widely used for large-scale approximate nearest neighbor search because of its storage and search efficiency. Recent work has found that deep supervised hashing can significantly outperform non-deep supervised hashing in many applications. However, most existing deep supervised hashing methods adopt a symmetric strategy, learning one deep hash function for both query points and database (retrieval) points. The training of these symmetric deep supervised hashing methods is typically time-consuming, which makes it hard for them to effectively utilize the supervised information when the database is large. In this paper, we propose a novel deep supervised hashing method, called asymmetric deep supervised hashing (ADSH), for large-scale nearest neighbor search. ADSH treats the query points and database points in an asymmetric way. More specifically, ADSH learns a deep hash function only for query points, while the hash codes for database points are learned directly. The training of ADSH is much more efficient than that of traditional symmetric deep supervised hashing methods. Experiments show that ADSH can achieve state-of-the-art performance in real applications.
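The asymmetry described above can be made concrete with a small sketch. In ADSH, relaxed query codes produced by the network are compared against binary database codes through their inner products, which are pushed toward a scaled pairwise similarity label. The snippet below is a minimal, hedged illustration of such an asymmetric squared-error objective in NumPy; the variable names (`U`, `B`, `S`) and the toy data are assumptions for illustration, not the paper's exact formulation or training procedure.

```python
import numpy as np

def asymmetric_loss(U, B, S, code_len):
    """Asymmetric pairwise objective (illustrative sketch):
    sum over query i and database point j of (u_i . b_j - code_len * S_ij)^2.

    U: (m, code_len) relaxed query codes in [-1, 1] (network outputs)
    B: (n, code_len) binary database codes in {-1, +1} (learned directly)
    S: (m, n) pairwise similarity labels in {-1, +1}
    """
    return float(np.sum((U @ B.T - code_len * S) ** 2))

# Toy demo with random data (hypothetical sizes, not from the paper).
rng = np.random.default_rng(0)
m, n, code_len = 4, 10, 8
B = rng.choice([-1.0, 1.0], size=(n, code_len))   # database codes
S = rng.choice([-1.0, 1.0], size=(m, n))          # similarity labels
U = np.tanh(rng.normal(size=(m, code_len)))       # relaxed query outputs
print(asymmetric_loss(U, B, S, code_len))
```

Because only `U` comes from a network while `B` is a free binary variable, the two sides can be optimized alternately: gradient steps on the network for the queries, and direct (e.g., bit-wise) updates for the database codes. This is what makes training scale to large databases without passing every database point through the network.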

Cite

Text

Jiang and Li. "Asymmetric Deep Supervised Hashing." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11814

Markdown

[Jiang and Li. "Asymmetric Deep Supervised Hashing." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/jiang2018aaai-asymmetric/) doi:10.1609/AAAI.V32I1.11814

BibTeX

@inproceedings{jiang2018aaai-asymmetric,
  title     = {{Asymmetric Deep Supervised Hashing}},
  author    = {Jiang, Qing-Yuan and Li, Wu-Jun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3342--3349},
  doi       = {10.1609/AAAI.V32I1.11814},
  url       = {https://mlanthology.org/aaai/2018/jiang2018aaai-asymmetric/}
}