Natural Supervised Hashing

Abstract

Among learning-based hashing methods, supervised hashing tries to find hash codes that preserve the semantic similarities of the original data. Recent years have witnessed many efforts devoted to designing objective functions and optimization methods for supervised hash learning, in order to improve search accuracy and reduce training cost. In this paper, we propose a very straightforward supervised hashing algorithm and demonstrate its superiority over several state-of-the-art methods. The key idea of our approach is to treat label vectors as binary codes and to learn target codes that have a structure similar to the label vectors. To circumvent direct optimization on large Gram matrices, we identify an inner-product-preserving transformation and use it to bring label vectors and hash codes close together without changing the structure. The optimization process is very efficient and scales well. In our experiments, training 16-bit and 96-bit codes on NUS-WIDE took only 3 and 6 minutes, respectively.
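The abstract's central observation is that an inner-product-preserving transformation leaves the Gram matrix, and hence the pairwise similarity structure, untouched. A minimal sketch of why this holds, using an orthogonal matrix as the transformation (a hypothetical toy setup, not the paper's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy label matrix Y: 5 samples, 3 classes (binary label vectors).
Y = rng.integers(0, 2, size=(5, 3)).astype(float)

# Random orthogonal matrix R via QR decomposition, so R @ R.T = I.
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# An orthogonal transform preserves all pairwise inner products:
# (Y R)(Y R)^T = Y (R R^T) Y^T = Y Y^T,
# so the Gram matrix never needs to be formed or optimized directly.
gram_before = Y @ Y.T
gram_after = (Y @ R) @ (Y @ R).T

print(np.allclose(gram_before, gram_after))  # True
```

This illustrates why the paper can bring label vectors and hash codes close to each other while avoiding direct optimization over the large n×n Gram matrix: the similarity structure is invariant under such a transformation.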

Cite

Text

Liu and Lu. "Natural Supervised Hashing." International Joint Conference on Artificial Intelligence, 2016.

Markdown

[Liu and Lu. "Natural Supervised Hashing." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/liu2016ijcai-natural/)

BibTeX

@inproceedings{liu2016ijcai-natural,
  title     = {{Natural Supervised Hashing}},
  author    = {Liu, Qi and Lu, Hongtao},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {1788--1794},
  url       = {https://mlanthology.org/ijcai/2016/liu2016ijcai-natural/}
}