Hamming Distance Metric Learning

Abstract

Motivated by large-scale multimedia applications, we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications: they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome the discontinuity of optimizing discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.
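To illustrate why binary codes make retrieval cheap, here is a minimal sketch (not the authors' code) of exact kNN in Hamming space. Codes are packed into Python integers, so the distance is a single XOR plus a popcount; the brute-force scan below stands in for the sub-linear indexing schemes the abstract alludes to. All names are illustrative.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary codes."""
    return bin(a ^ b).count("1")

def knn(query: int, codes: list[int], k: int) -> list[int]:
    """Indices of the k database codes nearest to `query` in Hamming distance."""
    order = sorted(range(len(codes)), key=lambda i: hamming(query, codes[i]))
    return order[:k]

# Example: query 0000 against a tiny database of 4-bit codes.
db = [0b1111, 0b0001, 0b0011]
print(knn(0b0000, db, k=2))  # the two closest codes are at indices 1 and 2
```

With longer codes one would pack bits into fixed-width words and use hardware popcount; the exact sub-linear search mentioned in the abstract additionally requires an index over code substrings rather than a linear scan.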

Cite

Text

Norouzi et al. "Hamming Distance Metric Learning." Neural Information Processing Systems, 2012.

Markdown

[Norouzi et al. "Hamming Distance Metric Learning." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/norouzi2012neurips-hamming/)

BibTeX

@inproceedings{norouzi2012neurips-hamming,
  title     = {{Hamming Distance Metric Learning}},
  author    = {Norouzi, Mohammad and Fleet, David J. and Salakhutdinov, Ruslan},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {1061--1069},
  url       = {https://mlanthology.org/neurips/2012/norouzi2012neurips-hamming/}
}