Learning Optimal Seeds for Diffusion-Based Salient Object Detection

Abstract

In diffusion-based saliency detection, an image is partitioned into superpixels and mapped to a graph, with superpixels as nodes and edge strengths proportional to superpixel similarity. Saliency information is then propagated over the graph using a diffusion process, whose equilibrium state yields the object saliency map. The optimal solution is the product of a propagation matrix and a saliency seed vector that contains a prior saliency assessment, typically obtained from either a bottom-up saliency detector or simple heuristics. In this work, we propose a method to learn optimal seeds for object saliency. Two types of features are computed per superpixel: the bottom-up saliency of the superpixel region and a set of mid-level vision features informative of how likely the superpixel is to belong to an object. The combination of features that best discriminates between object and background saliency is then learned, using a large-margin formulation of the discriminant saliency principle. Propagating the resulting saliency seeds with a diffusion process is finally shown to outperform the state of the art on several salient object detection datasets.
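
To make the closed-form solution described above concrete, the sketch below propagates a seed vector over a superpixel graph. It assumes a fully connected graph with Gaussian affinities and a manifold-ranking-style propagation matrix (D - alpha*W)^-1, a common choice in this line of work; the function name, the alpha and sigma values, and the affinity construction are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def diffusion_saliency(features, seeds, alpha=0.99, sigma=0.1):
    """Propagate seed saliency over a superpixel graph.

    features : (N, d) array of per-superpixel descriptors (e.g. mean Lab color).
    seeds    : (N,) prior saliency seed vector.
    Returns the equilibrium saliency s* = (D - alpha * W)^-1 @ seeds,
    a manifold-ranking-style propagation (an assumed, common choice).
    """
    # Edge strengths proportional to superpixel similarity (Gaussian affinity).
    diff = features[:, None, :] - features[None, :, :]
    W = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)          # no self-loops

    D = np.diag(W.sum(axis=1))        # degree matrix
    A = np.linalg.inv(D - alpha * W)  # propagation matrix

    s = A @ seeds                     # equilibrium saliency per superpixel
    return (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]

# Toy usage: 4 superpixels with 3-d features and a heuristic seed vector.
feats = np.random.rand(4, 3)
seed = np.array([0.1, 0.9, 0.8, 0.05])
print(diffusion_saliency(feats, seed))
```

The paper's contribution is learning the seed vector passed in above (here a hand-set toy prior) from bottom-up saliency and mid-level object features via a large-margin formulation.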

Cite

Text

Lu et al. "Learning Optimal Seeds for Diffusion-Based Salient Object Detection." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.357

Markdown

[Lu et al. "Learning Optimal Seeds for Diffusion-Based Salient Object Detection." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/lu2014cvpr-learning/) doi:10.1109/CVPR.2014.357

BibTeX

@inproceedings{lu2014cvpr-learning,
  title     = {{Learning Optimal Seeds for Diffusion-Based Salient Object Detection}},
  author    = {Lu, Song and Mahadevan, Vijay and Vasconcelos, Nuno},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.357},
  url       = {https://mlanthology.org/cvpr/2014/lu2014cvpr-learning/}
}