Asymmetric Sparse Kernel Approximations for Large-Scale Visual Search

Abstract

We introduce an asymmetric sparse approximate embedding optimized for fast kernel comparison operations arising in large-scale visual search. In contrast to other methods, which perform an explicit approximate embedding via kernel PCA followed by distance compression in R^d and lose information at both steps, our method uses the implicit kernel representation directly. In addition, we demonstrate empirically that our method needs no explicit training step and can operate with a dictionary of random exemplars drawn from the dataset. We evaluate it on three benchmark image retrieval datasets: SIFT1M, ImageNet, and 80M-TinyImages.
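
The general recipe the abstract describes, sparse coding of database items over a dictionary of random exemplars in the implicit kernel space, with asymmetric scoring that keeps the query side exact, can be sketched in a few lines. The NumPy code below is a hedged illustration of that idea only, not the paper's exact algorithm: the RBF kernel, the greedy OMP-style coder, and all names and parameters (`rbf_kernel`, `kernel_omp_code`, dictionary size `m`, sparsity `T`) are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix between rows of X and rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def kernel_omp_code(k_Dx, K_DD, sparsity):
    """Greedy (OMP-style) sparse code for one item in the RKHS.

    k_Dx : kernel values between the dictionary and the item, shape (m,)
    K_DD : dictionary Gram matrix, shape (m, m)
    Returns a length-m coefficient vector alpha with at most `sparsity`
    nonzeros such that k(x, .) ~= sum_j alpha_j k(d_j, .).
    """
    m = K_DD.shape[0]
    alpha = np.zeros(m)
    support = []
    residual_corr = k_Dx.copy()            # correlation of the residual with each atom
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(residual_corr)))
        if j in support:
            break
        support.append(j)
        S = np.array(support)
        # Least-squares refit on the current support, using RKHS inner products.
        coef = np.linalg.lstsq(K_DD[np.ix_(S, S)], k_Dx[S], rcond=None)[0]
        alpha[:] = 0.0
        alpha[S] = coef
        residual_corr = k_Dx - K_DD @ alpha
    return alpha

# Toy data; the dictionary is just m random exemplars from the database (no training).
rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 64))
queries = rng.standard_normal((5, 64))

m, T = 128, 8                                  # dictionary size, sparsity (illustrative)
D = database[rng.choice(len(database), m, replace=False)]

K_DD = rbf_kernel(D, D)
K_DX = rbf_kernel(D, database)                 # (m, n)
codes = np.stack([kernel_omp_code(K_DX[:, i], K_DD, T)
                  for i in range(database.shape[0])], axis=1)   # (m, n) sparse codes

# Asymmetric scoring: the query side stays exact (kernel against the dictionary);
# only the database side is sparsely approximated.
K_QD = rbf_kernel(queries, D)                  # (q, m)
approx_scores = K_QD @ codes                   # ~ K(queries, database), shape (q, n)
```

The asymmetry is in the last two lines: each query is compared exactly against the small dictionary, and only the stored database items are approximated by sparse codes, in contrast to schemes that compress both sides after an explicit embedding.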

Cite

Text

Davis et al. "Asymmetric Sparse Kernel Approximations for Large-Scale Visual Search." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.271

Markdown

[Davis et al. "Asymmetric Sparse Kernel Approximations for Large-Scale Visual Search." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/davis2014cvpr-asymmetric/) doi:10.1109/CVPR.2014.271

BibTeX

@inproceedings{davis2014cvpr-asymmetric,
  title     = {{Asymmetric Sparse Kernel Approximations for Large-Scale Visual Search}},
  author    = {Davis, Damek and Balzer, Jonathan and Soatto, Stefano},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.271},
  url       = {https://mlanthology.org/cvpr/2014/davis2014cvpr-asymmetric/}
}