Large Vocabularies for Keypoint-Based Representation and Matching of Image Patches

Abstract

In large visual databases, detecting potentially similar content requires simple and robust methods. Keypoint correspondences are a popular approach which, nevertheless, cannot (with typical descriptors) detect similarities in a wider image context, e.g. similar fragments. Such capabilities require an analysis of configuration constraints. We propose keypoint descriptions which, by using sets of words from large vocabularies, represent semi-local characteristics of images. Thus, similar image patches (including similar-looking objects) can be preliminarily retrieved by straightforward keypoint matching. A limited-scale experimental verification is provided. The approach can prospectively serve as a simple mid-level feature-matching mechanism in large and unpredictable visual databases.
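As a rough illustration of the visual-word idea the abstract builds on (this is a generic sketch, not the paper's actual descriptor construction), keypoint descriptors can be quantized to their nearest vocabulary word, and two patches can then be compared by the overlap of their word sets. The function names and the toy 2-D vocabulary below are hypothetical:

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each keypoint descriptor to the index of its nearest visual word."""
    # Pairwise squared distances between descriptors and vocabulary centroids
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def patch_similarity(words_a, words_b):
    """Compare two patches by the Jaccard overlap of their visual-word sets."""
    a, b = set(words_a), set(words_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Toy example: a 4-word vocabulary in a 2-D descriptor space
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
patch1 = np.array([[0.1, 0.1], [0.9, 0.1]])   # quantizes to words {0, 1}
patch2 = np.array([[0.2, 0.0], [0.9, 0.9]])   # quantizes to words {0, 3}
print(patch_similarity(quantize(patch1, vocab), quantize(patch2, vocab)))  # → 1/3
```

In practice the descriptors would be high-dimensional (e.g. SIFT-like) and the vocabulary large, so that each word set compactly characterizes the semi-local appearance around a keypoint.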

Cite

Text

Sluzek. "Large Vocabularies for Keypoint-Based Representation and Matching of Image Patches." European Conference on Computer Vision Workshops, 2012. doi:10.1007/978-3-642-33863-2_23

Markdown

[Sluzek. "Large Vocabularies for Keypoint-Based Representation and Matching of Image Patches." European Conference on Computer Vision Workshops, 2012.](https://mlanthology.org/eccvw/2012/sluzek2012eccvw-large/) doi:10.1007/978-3-642-33863-2_23

BibTeX

@inproceedings{sluzek2012eccvw-large,
  title     = {{Large Vocabularies for Keypoint-Based Representation and Matching of Image Patches}},
  author    = {Sluzek, Andrzej},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2012},
  pages     = {229--238},
  doi       = {10.1007/978-3-642-33863-2_23},
  url       = {https://mlanthology.org/eccvw/2012/sluzek2012eccvw-large/}
}