SOLAR: Second-Order Loss and Attention for Image Retrieval

Abstract

Recent works in deep learning have shown that second-order information is beneficial in many computer-vision tasks. Second-order information can be enforced both in the spatial context and the abstract feature dimensions. In this work, we explore two second-order components. One is focused on second-order spatial information to increase the performance of image descriptors, both local and global. It is used to re-weight feature maps, and thus emphasise salient image locations that are subsequently used for description. The second component is concerned with a second-order similarity (SOS) loss, which we extend to global descriptors for image retrieval, and which is used to enhance the triplet loss with hard-negative mining. We validate our approach on two different tasks and datasets for image retrieval and image matching. The results show that our two second-order components complement each other, bringing significant performance improvements in both tasks and leading to state-of-the-art results across the public benchmarks. Code available at: http://github.com/tonyngjichun/SOLAR
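The two components summarised above can be illustrated in code. The sketch below is a minimal PyTorch approximation under my own assumptions about shapes and hyper-parameters: a non-local style self-attention block that re-weights a CNN feature map spatially, and an SOS-style regulariser that keeps pairwise (second-order) distances consistent between anchor and positive descriptors. The class and function names are hypothetical; the authors' reference implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SecondOrderAttention(nn.Module):
    """Non-local style self-attention that re-weights a feature map,
    emphasising salient spatial locations before descriptor pooling.
    (Illustrative sketch, not the authors' exact module.)"""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2)                         # (b, c', h*w)
        k = self.key(x).flatten(2)                           # (b, c', h*w)
        v = self.value(x).flatten(2)                         # (b, c,  h*w)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (b, h*w, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)    # re-weighted map
        return x + self.out(out)                             # residual connection


def second_order_similarity_loss(anchors: torch.Tensor,
                                 positives: torch.Tensor) -> torch.Tensor:
    """SOS-style regulariser: matching descriptors should preserve their
    pairwise (second-order) distances within the batch. Inputs are
    L2-normalised descriptors of shape (n, d)."""
    d_a = torch.cdist(anchors, anchors)      # (n, n) anchor-anchor distances
    d_p = torch.cdist(positives, positives)  # (n, n) positive-positive distances
    diff = (d_a - d_p) ** 2                  # diagonal terms are zero (d(x, x) = 0)
    return torch.sqrt(diff.sum(dim=1) + 1e-12).mean()


if __name__ == "__main__":
    attn = SecondOrderAttention(channels=256)
    fmap = torch.randn(2, 256, 32, 32)
    print(attn(fmap).shape)                  # torch.Size([2, 256, 32, 32])

    a = F.normalize(torch.randn(8, 128), dim=1)
    p = F.normalize(torch.randn(8, 128), dim=1)
    print(second_order_similarity_loss(a, p))  # scalar regulariser value
```

In practice such a regulariser would be added to a first-order triplet loss with hard-negative mining, as described in the abstract, rather than used on its own.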

Cite

Text

Ng et al. "SOLAR: Second-Order Loss and Attention for Image Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58595-2_16

Markdown

[Ng et al. "SOLAR: Second-Order Loss and Attention for Image Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/ng2020eccv-solar/) doi:10.1007/978-3-030-58595-2_16

BibTeX

@inproceedings{ng2020eccv-solar,
  title     = {{SOLAR: Second-Order Loss and Attention for Image Retrieval}},
  author    = {Ng, Tony and Balntas, Vassileios and Tian, Yurun and Mikolajczyk, Krystian},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58595-2_16},
  url       = {https://mlanthology.org/eccv/2020/ng2020eccv-solar/}
}