Query Specific Fusion for Image Retrieval

Abstract

Recent image retrieval algorithms based on local features indexed by a vocabulary tree and holistic features indexed by compact hashing codes both demonstrate excellent scalability. However, their retrieval precision may vary dramatically among queries. This motivates us to investigate how to fuse the ordered retrieval sets given by multiple retrieval methods, to further enhance the retrieval precision. Thus, we propose a graph-based query specific fusion approach where multiple retrieval sets are merged and reranked by conducting a link analysis on a fused graph. The retrieval quality of an individual method is measured by the consistency of the top candidates' nearest neighborhoods. Hence, the proposed method is capable of adaptively integrating the strengths of the retrieval methods using local or holistic features for different queries without any supervision. Extensive experiments demonstrate competitive performance on four public datasets, i.e., the UKbench, Corel-5K, Holidays and San Francisco Landmarks datasets.
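The fusion idea in the abstract — weight each method by the consistency of its top candidates' neighborhoods, merge the retrieval sets into one graph, and rerank by link analysis — can be sketched roughly as follows. This is an illustrative simplification, not the authors' exact formulation: the names `fuse_and_rerank` and `consistency_weight`, the Jaccard-based consistency proxy, and the use of personalized PageRank as the link-analysis step are all assumptions for the sketch.

```python
from collections import defaultdict

def consistency_weight(neighbors, query, k=3):
    """Query-specific quality of one retrieval method: mean Jaccard overlap
    between the query's top-k list and the top-k lists of those candidates.
    (A hypothetical proxy for the paper's neighborhood-consistency measure.)"""
    top = neighbors[query][:k]
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return sum(jaccard(top, neighbors.get(c, [])[:k]) for c in top) / max(len(top), 1)

def fuse_and_rerank(methods, query, k=3, damping=0.85, iters=30):
    """methods: one dict per retrieval method, mapping image id -> ranked
    neighbor list. Returns the candidates reranked on the fused graph."""
    # Fuse: add a reciprocal edge a--b whenever b is among a's top-k under
    # some method, scaled by that method's query-specific consistency weight.
    edges = defaultdict(float)
    for m in methods:
        wm = consistency_weight(m, query, k)
        for a in [query] + m[query][:k]:
            for b in m.get(a, [])[:k]:
                edges[(a, b)] += wm
                edges[(b, a)] += wm
    nodes = sorted({n for e in edges for n in e})
    out = defaultdict(float)
    for (a, _), v in edges.items():
        out[a] += v
    # Link analysis: personalized PageRank seeded at the query node.
    scores = {n: float(n == query) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) * float(n == query) for n in nodes}
        for (a, b), v in edges.items():
            new[b] += damping * scores[a] * v / out[a]
        scores = new
    return sorted((n for n in nodes if n != query), key=lambda n: -scores[n])
```

Because the consistency weight is recomputed per query, a method whose top candidates agree with each other dominates the fused graph for that query, which mirrors the unsupervised, query-adaptive behavior described in the abstract.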

Cite

Text

Zhang et al. "Query Specific Fusion for Image Retrieval." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33709-3_47

Markdown

[Zhang et al. "Query Specific Fusion for Image Retrieval." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/zhang2012eccv-query/) doi:10.1007/978-3-642-33709-3_47

BibTeX

@inproceedings{zhang2012eccv-query,
  title     = {{Query Specific Fusion for Image Retrieval}},
  author    = {Zhang, Shaoting and Yang, Ming and Cour, Timothée and Yu, Kai and Metaxas, Dimitris N.},
  booktitle = {European Conference on Computer Vision},
  year      = {2012},
  pages     = {660--673},
  doi       = {10.1007/978-3-642-33709-3_47},
  url       = {https://mlanthology.org/eccv/2012/zhang2012eccv-query/}
}