Exemplar-Driven Top-Down Saliency Detection via Deep Association
Abstract
Top-down saliency detection is a knowledge-driven search task. While some previous methods aim to learn this "knowledge" from category-specific data, others transfer existing annotations in a large dataset through appearance matching. In contrast, we propose in this paper a locate-by-exemplar strategy. This approach is challenging, as we only use a few exemplars (up to 4) and the appearances of the query object and the exemplars can be very different. To address this challenge, we design a two-stage deep model to learn the intra-class association between the exemplars and query objects. The first stage learns object-to-object association, and the second stage learns background discrimination. Extensive experimental evaluations show that the proposed method outperforms various baselines as well as category-specific models. In addition, we explore the influence of exemplar properties, in terms of exemplar number and quality. Furthermore, we show that the learned model is universal and generalizes well to unseen objects.
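To make the two-stage idea from the abstract concrete, here is a minimal illustrative sketch in PyTorch. It is not the authors' architecture (the paper specifies its own CNN design and training procedure); the class names, layer sizes, and the max-over-exemplars scoring rule are all assumptions chosen only to show the shape of the approach: a shared encoder pairs each exemplar patch with a query patch for association (stage 1), and a separate head scores object vs. background (stage 2).

```python
import torch
import torch.nn as nn

class AssociationNet(nn.Module):
    """Stage 1 (hypothetical sketch): predict whether an exemplar patch
    and a query patch depict objects of the same category."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared encoder applied to both exemplar and query patches.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # Compare the paired features and output an association logit.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, exemplar, query):
        fe, fq = self.encoder(exemplar), self.encoder(query)
        return self.classifier(torch.cat([fe, fq], dim=1))

class BackgroundNet(nn.Module):
    """Stage 2 (hypothetical sketch): discriminate object regions
    from background given encoded patch features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feat):
        return self.head(feat)

# Usage: score one query patch against up to 4 exemplars and keep the
# maximum association score as its saliency evidence (assumed rule).
net = AssociationNet()
exemplars = torch.randn(4, 3, 64, 64)   # a few exemplar patches
query = torch.randn(1, 3, 64, 64)       # one candidate query patch
assoc = torch.sigmoid(net(exemplars, query.expand(4, -1, -1, -1))).max()
bg = torch.sigmoid(BackgroundNet()(net.encoder(query)))  # stage-2 score
```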
Cite
Text
He et al. "Exemplar-Driven Top-Down Saliency Detection via Deep Association." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.617
Markdown
[He et al. "Exemplar-Driven Top-Down Saliency Detection via Deep Association." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/he2016cvpr-exemplardriven/) doi:10.1109/CVPR.2016.617
BibTeX
@inproceedings{he2016cvpr-exemplardriven,
title = {{Exemplar-Driven Top-Down Saliency Detection via Deep Association}},
author = {He, Shengfeng and Lau, Rynson W.H. and Yang, Qingxiong},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.617},
url = {https://mlanthology.org/cvpr/2016/he2016cvpr-exemplardriven/}
}