Backtracking ScSPM Image Classifier for Weakly Supervised Top-Down Saliency
Abstract
Top-down saliency models produce a probability map that peaks at target locations specified by a task/goal such as object detection. They are usually trained in a supervised setting involving annotations of objects. We propose a weakly supervised top-down saliency framework using only binary labels that indicate the presence/absence of an object in an image. First, the probabilistic contribution of each image patch to the confidence of an ScSPM-based classifier produces a Reverse-ScSPM (R-ScSPM) saliency map. Neighborhood information is then incorporated through a contextual saliency map, which is estimated using logistic regression learnt on patches having high R-ScSPM saliency. The two saliency maps are combined to obtain the final saliency map. We evaluate the proposed weakly supervised top-down saliency framework, which achieves performance comparable to fully supervised approaches. Experiments are carried out on 5 challenging datasets across 3 different applications.
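The abstract outlines a two-stage pipeline: a patch-contribution (R-ScSPM) map, a contextual map from logistic regression trained on high-saliency patches, and a fusion of the two. The sketch below is a minimal, hypothetical illustration of that flow; the feature dimensions, thresholds, and the element-wise fusion rule are assumptions for illustration only and are not the authors' implementation.

```python
# Hypothetical sketch of the two-stage saliency combination described in the
# abstract. Names, shapes, thresholds, and the fusion rule are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assume a 16x16 grid of image patches, each with a 128-d feature vector.
H, W, D = 16, 16, 128
patch_features = rng.standard_normal((H * W, D))

# Stage 1 (stand-in for R-ScSPM): per-patch contribution to the classifier
# confidence, normalized to [0, 1]. Simulated here with random scores.
r_scspm_saliency = rng.random(H * W)

# Stage 2: contextual saliency. Train logistic regression on patches with
# high R-ScSPM saliency (positives) vs. low saliency (negatives), then
# score every patch with the learnt model.
pos = r_scspm_saliency > 0.8
neg = r_scspm_saliency < 0.2
X = np.vstack([patch_features[pos], patch_features[neg]])
y = np.concatenate([np.ones(pos.sum()), np.zeros(neg.sum())])
clf = LogisticRegression(max_iter=1000).fit(X, y)
contextual_saliency = clf.predict_proba(patch_features)[:, 1]

# Fuse the two maps (element-wise product used here as a placeholder for
# whatever combination rule the paper actually employs).
final_saliency = (r_scspm_saliency * contextual_saliency).reshape(H, W)
print(final_saliency.shape)  # (16, 16)
```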
Cite
Text
Cholakkal et al. "Backtracking ScSPM Image Classifier for Weakly Supervised Top-Down Saliency." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.570

Markdown

[Cholakkal et al. "Backtracking ScSPM Image Classifier for Weakly Supervised Top-Down Saliency." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/cholakkal2016cvpr-backtracking/) doi:10.1109/CVPR.2016.570

BibTeX
@inproceedings{cholakkal2016cvpr-backtracking,
title = {{Backtracking ScSPM Image Classifier for Weakly Supervised Top-Down Saliency}},
author = {Cholakkal, Hisham and Johnson, Jubin and Rajan, Deepu},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.570},
url = {https://mlanthology.org/cvpr/2016/cholakkal2016cvpr-backtracking/}
}