An Efficient Parallel Strategy for Matching Visual Self-Similarities in Large Image Databases

Abstract

Due to the popularity of online social systems, a huge and still increasing amount of image data exists on the web. In order to handle this massive amount of visual information, algorithms often need to be redesigned. In this work, we develop an efficient approach to find visual similarities between images that runs completely on the GPU and is applicable to large image databases. Based on local self-similarity descriptors, the approach finds similarities even across modalities. Given a set of images, a database is created by storing all descriptors in an arrangement suitable for parallel GPU-based comparison. A novel voting scheme further considers the spatial layout of descriptors with hardly any overhead. Thousands of images are searched in only a few seconds. We apply our algorithm to cluster a set of image responses to identify various senses of ambiguous words and to re-tag similar images with missing tags.
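To illustrate the kind of feature the paper builds on, the following is a minimal sketch of a local self-similarity descriptor in the spirit of Shechtman and Irani's formulation: a small central patch is correlated (via sum-of-squared-differences turned into a similarity) with every patch in a larger surrounding region, and the resulting correlation surface is max-pooled into a coarse grid of bins. The function name, parameter choices (patch/region sizes, bin count, normalization constant), and the simple rectangular binning are illustrative assumptions, not the authors' exact implementation, which additionally uses log-polar bins and GPU-parallel evaluation.

```python
import numpy as np

def self_similarity_descriptor(img, y, x, patch=5, region=21, bins=4):
    """Toy local self-similarity descriptor (illustrative only):
    correlate a small central patch with every patch in a larger
    surrounding region, then max-pool the correlation surface into
    a coarse bins x bins grid."""
    pr, rr = patch // 2, region // 2
    center = img[y - pr:y + pr + 1, x - pr:x + pr + 1].astype(np.float64)
    size = region - patch + 1          # side length of the correlation surface
    surface = np.empty((size, size))
    for dy in range(size):
        for dx in range(size):
            yy = y - rr + dy           # top-left corner of candidate patch
            xx = x - rr + dx
            cand = img[yy:yy + patch, xx:xx + patch].astype(np.float64)
            ssd = np.sum((center - cand) ** 2)
            # turn SSD into a similarity in (0, 1]; the normalization
            # constant here is an ad-hoc choice for 8-bit images
            surface[dy, dx] = np.exp(-ssd / (patch * patch * 255.0))
    # max-pool the surface into bins x bins cells -> descriptor vector
    desc = np.zeros(bins * bins)
    step = size / bins
    for by in range(bins):
        for bx in range(bins):
            cell = surface[int(by * step):int((by + 1) * step),
                           int(bx * step):int((bx + 1) * step)]
            desc[by * bins + bx] = cell.max()
    return desc
```

Because the descriptor encodes only how a patch resembles its own neighborhood, not raw intensities, two images can match even when their modalities differ (e.g. a photograph and a sketch), which is the property the paper exploits; the per-pixel independence of the computation is also what makes it amenable to the GPU-parallel database comparison described in the abstract.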

Cite

Text

Schwarz et al. "An Efficient Parallel Strategy for Matching Visual Self-Similarities in Large Image Databases." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33863-2_28

Markdown

[Schwarz et al. "An Efficient Parallel Strategy for Matching Visual Self-Similarities in Large Image Databases." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/schwarz2012eccv-efficient/) doi:10.1007/978-3-642-33863-2_28

BibTeX

@inproceedings{schwarz2012eccv-efficient,
  title     = {{An Efficient Parallel Strategy for Matching Visual Self-Similarities in Large Image Databases}},
  author    = {Schwarz, Katharina and Häußler, Tobias and Lensch, Hendrik P. A.},
  booktitle = {European Conference on Computer Vision},
  year      = {2012},
  pages     = {281-290},
  doi       = {10.1007/978-3-642-33863-2_28},
  url       = {https://mlanthology.org/eccv/2012/schwarz2012eccv-efficient/}
}