Few-Shot Hash Learning for Image Retrieval

Abstract

Current approaches to hash-based semantic image retrieval assume a set of predefined categories and rely on supervised learning from a large number of annotated samples. The need for labeled samples limits their applicability in scenarios in which a user provides, at query time, a small set of training images defining a customized novel category. This paper addresses the problem of few-shot hash learning, in the spirit of one-shot learning in image recognition and classification and of early work on locality-sensitive hashing. More precisely, our approach is based on the insight that universal hash functions can be learned offline from unlabeled data because of the information implicit in the density structure of a discriminative feature space. We can then select a task-specific combination of hash codes for a novel category from a few labeled samples. The resulting unsupervised generic hashing (UGH) significantly outperforms current supervised and unsupervised hashing approaches on image retrieval tasks with small training samples.
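The two-stage idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: random-hyperplane hashing stands in for the learned universal hash functions, the features are synthetic, and all sizes (64-d features, 256 bits, 32 selected bits, 5 labeled samples) are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline stage: features from an unlabeled gallery (synthetic stand-in for a
# discriminative deep feature space) and a pool of generic hash functions.
d, n_bits = 64, 256
features = rng.normal(size=(1000, d))

# Random hyperplanes play the role of the learned universal hash functions
# (an LSH-style stand-in, not the paper's construction).
hyperplanes = rng.normal(size=(n_bits, d))

def hash_bits(x):
    """One binary code per row: sign of the projection onto each hyperplane."""
    return (x @ hyperplanes.T > 0).astype(np.uint8)

codes = hash_bits(features)  # precomputed gallery codes, shape (1000, 256)

# Query time: a few labeled samples define a novel category (hypothetical data).
pos = rng.normal(loc=0.5, size=(5, d))
neg = rng.normal(loc=-0.5, size=(5, d))
pos_codes, neg_codes = hash_bits(pos), hash_bits(neg)

# Task-specific selection: score each bit by how differently it fires on
# positives vs. negatives, and keep the most discriminative bits.
score = np.abs(pos_codes.mean(axis=0) - neg_codes.mean(axis=0))
selected = np.argsort(score)[-32:]

# Retrieval: Hamming distance on the selected bits to a positive prototype.
prototype = (pos_codes.mean(axis=0)[selected] > 0.5).astype(np.uint8)
dist = np.count_nonzero(codes[:, selected] != prototype, axis=1)
ranking = np.argsort(dist)  # gallery indices, nearest first
```

The point of the sketch is the division of labor: the expensive part (hashing the gallery) uses no labels and is done once, while the per-category part is only a cheap bit-selection step over the few labeled samples.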

Cite

Text

Gui et al. "Few-Shot Hash Learning for Image Retrieval." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.148

Markdown

[Gui et al. "Few-Shot Hash Learning for Image Retrieval." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/gui2017iccvw-fewshot/) doi:10.1109/ICCVW.2017.148

BibTeX

@inproceedings{gui2017iccvw-fewshot,
  title     = {{Few-Shot Hash Learning for Image Retrieval}},
  author    = {Gui, Liangke and Wang, Yu-Xiong and Hebert, Martial},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {1228--1237},
  doi       = {10.1109/ICCVW.2017.148},
  url       = {https://mlanthology.org/iccvw/2017/gui2017iccvw-fewshot/}
}