A Learning Framework for Nearest Neighbor Search

Abstract

Can we leverage learning techniques to build a fast nearest-neighbor (NN) retrieval data structure? We present a general learning framework for the NN problem in which sample queries are used to learn the parameters of a data structure that minimize the retrieval time and/or the miss rate. We explore the potential of this novel framework through two popular NN data structures: KD-trees and the rectilinear structures employed by locality sensitive hashing. We derive a generalization theory for these data structure classes and present simple learning algorithms for both. Experimental results reveal that learning often improves on the already strong performance of these data structures.
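For context on the kind of data structure the abstract refers to, the standard (non-learned) KD-tree nearest-neighbor search can be sketched as below. This is a generic textbook baseline, not the learned variant the paper proposes; all names here are illustrative.

```python
class KDNode:
    """Node of a KD-tree: a point, a splitting axis, and two subtrees."""
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    """Recursively build a KD-tree, cycling through coordinates as split axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, query, best=None):
    """Branch-and-bound NN search: descend toward the query, then backtrack
    only into subtrees that could still contain a closer point."""
    if node is None:
        return best
    if best is None or sqdist(query, node.point) < sqdist(query, best):
        best = node.point
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    # Visit the far side only if the splitting plane is closer than the best so far.
    if diff ** 2 < sqdist(query, best):
        best = nearest(far, query, best)
    return best
```

The paper's framework learns the parameters of such structures (e.g., where to split) from sample queries, rather than fixing them by a data-independent rule as above.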

Cite

Text

Cayton and Dasgupta. "A Learning Framework for Nearest Neighbor Search." Neural Information Processing Systems, 2007.

Markdown

[Cayton and Dasgupta. "A Learning Framework for Nearest Neighbor Search." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/cayton2007neurips-learning/)

BibTeX

@inproceedings{cayton2007neurips-learning,
  title     = {{A Learning Framework for Nearest Neighbor Search}},
  author    = {Cayton, Lawrence and Dasgupta, Sanjoy},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {233--240},
  url       = {https://mlanthology.org/neurips/2007/cayton2007neurips-learning/}
}