Learning "Forgiving" Hash Functions: Algorithms and Large Scale Tests

Abstract

The problem of efficiently finding similar items in a large corpus of high-dimensional data points arises in many real-world tasks, such as music, image, and video retrieval. Beyond the scaling difficulties that arise with lookups in large data sets, the complexity in these domains is exacerbated by an imprecise definition of similarity. In this paper, we describe a method to learn a similarity function from only weakly labeled positive examples. Once learned, this similarity function is used as the basis of a hash function to severely constrain the number of points considered for each lookup. When tested on a large real-world audio dataset, the method considers only a tiny fraction of the points for each lookup. To further increase efficiency, no comparisons in the original high-dimensional space of points are required. The performance far surpasses, in terms of both efficiency and accuracy, a state-of-the-art Locality-Sensitive-Hashing based technique on the same problem and data set.
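The retrieval pattern the abstract describes — hash each point once, then restrict every lookup to the query's bucket — can be sketched as follows. This is a minimal illustration, not the paper's method: `toy_hash` is a stand-in sign-pattern hash, whereas the paper learns the hash function from weakly labeled examples.

```python
from collections import defaultdict

def build_index(points, hash_fn):
    """Group point IDs into buckets keyed by their hash code."""
    index = defaultdict(list)
    for pid, vec in points.items():
        index[hash_fn(vec)].append(pid)
    return index

def lookup(query, index, hash_fn):
    """Return only the candidates sharing the query's bucket;
    no comparisons in the original high-dimensional space."""
    return index.get(hash_fn(query), [])

# Toy stand-in hash (hypothetical): sign pattern of the first
# two coordinates. The paper instead learns this function.
def toy_hash(vec):
    return tuple(v > 0 for v in vec[:2])

points = {
    "a": (0.9, 0.8, 0.1),
    "b": (1.1, 0.7, -0.3),
    "c": (-0.5, 0.2, 0.9),
}
index = build_index(points, toy_hash)
candidates = lookup((1.0, 0.9, 0.0), index, toy_hash)
print(candidates)  # only "a" and "b" are ever considered
```

The key property, which the paper's learned ("forgiving") hash is trained to provide, is that similar items collide in the same bucket so each lookup touches only a tiny fraction of the corpus.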

Cite

Text

Baluja and Covell. "Learning "Forgiving" Hash Functions: Algorithms and Large Scale Tests." International Joint Conference on Artificial Intelligence, 2007.

Markdown

[Baluja and Covell. "Learning "Forgiving" Hash Functions: Algorithms and Large Scale Tests." International Joint Conference on Artificial Intelligence, 2007.](https://mlanthology.org/ijcai/2007/baluja2007ijcai-learning/)

BibTeX

@inproceedings{baluja2007ijcai-learning,
  title     = {{Learning "Forgiving" Hash Functions: Algorithms and Large Scale Tests}},
  author    = {Baluja, Shumeet and Covell, Michele},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2007},
  pages     = {2663--2669},
  url       = {https://mlanthology.org/ijcai/2007/baluja2007ijcai-learning/}
}