Texture-Guided Saliency Distilling for Unsupervised Salient Object Detection

Abstract

Deep learning-based Unsupervised Salient Object Detection (USOD) mainly relies on noisy saliency pseudo-labels generated by traditional handcrafted methods or pre-trained networks. To cope with noisy labels, one class of methods focuses only on easy samples with reliable labels but ignores the valuable knowledge in hard samples. In this paper, we propose a novel USOD method that mines rich and accurate saliency knowledge from both easy and hard samples. First, we propose a Confidence-aware Saliency Distilling (CSD) strategy that scores samples conditioned on their confidence, guiding the model to distill saliency knowledge progressively from easy samples to hard ones. Second, we propose a Boundary-aware Texture Matching (BTM) strategy that refines the boundaries of noisy labels by matching the textures around the predicted boundaries. Extensive experiments on RGB, RGB-D, RGB-T, and video SOD benchmarks show that our method achieves state-of-the-art USOD performance. Code is available at www.github.com/moothes/A2S-v2.
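The easy-to-hard idea behind CSD can be illustrated with a confidence-weighted loss: predictions near 0 or 1 are treated as confident (easy), and a time-dependent exponent gradually raises the weight of low-confidence (hard) samples over training. This is a minimal hypothetical sketch of such a schedule, not the paper's exact formulation; the function names and the linear `gamma` decay are assumptions for illustration.

```python
import math

def confidence_weight(pred, t, T):
    """Confidence-aware sample weight (illustrative, not the paper's formula).

    pred: predicted saliency probability in (0, 1).
    t, T: current and total training steps; gamma decays from 1 to 0,
    so hard (ambiguous) samples are down-weighted early and weighted
    equally by the end of training.
    """
    conf = abs(pred - 0.5) * 2.0   # confidence in [0, 1]
    gamma = 1.0 - t / T            # decays linearly from 1 to 0
    return conf ** gamma           # approaches 1.0 for all samples as gamma -> 0

def weighted_bce(pred, label, t, T, eps=1e-7):
    """Per-pixel binary cross-entropy against a pseudo-label, scaled by confidence."""
    p = min(max(pred, eps), 1.0 - eps)
    bce = -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))
    return confidence_weight(pred, t, T) * bce
```

Early in training (`t = 0`), an ambiguous prediction like 0.6 contributes far less than a confident one like 0.95; by the end (`t = T`), both contribute at full weight, so the model progressively absorbs knowledge from hard samples.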

Cite

Text

Zhou et al. "Texture-Guided Saliency Distilling for Unsupervised Salient Object Detection." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00701

Markdown

[Zhou et al. "Texture-Guided Saliency Distilling for Unsupervised Salient Object Detection." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/zhou2023cvpr-textureguided/) doi:10.1109/CVPR52729.2023.00701

BibTeX

@inproceedings{zhou2023cvpr-textureguided,
  title     = {{Texture-Guided Saliency Distilling for Unsupervised Salient Object Detection}},
  author    = {Zhou, Huajun and Qiao, Bo and Yang, Lingxiao and Lai, Jianhuang and Xie, Xiaohua},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {7257--7267},
  doi       = {10.1109/CVPR52729.2023.00701},
  url       = {https://mlanthology.org/cvpr/2023/zhou2023cvpr-textureguided/}
}