Advancing Saliency Ranking with Human Fixations: Dataset, Models and Benchmarks

Abstract

Saliency ranking detection (SRD) has emerged as a challenging task in computer vision, aiming not only to identify salient objects within images but also to rank them by their degree of saliency. Existing SRD datasets have been created primarily from mouse-trajectory data, which inadequately captures the intricacies of human visual perception. Addressing this gap, this paper introduces the first large-scale SRD dataset, SIFR, constructed from genuine human fixation data and thereby aligned more closely with real visual perceptual processes. To establish a baseline for this dataset, we propose QAGNet, a novel model that leverages salient instance query features from a transformer detector within a tri-tiered nested graph. Through extensive experiments, we demonstrate that our approach outperforms existing state-of-the-art methods on two widely used SRD datasets and our newly proposed dataset. Code and dataset are available at https://github.com/EricDengbowen/QAGNet.
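To make the task output concrete: SRD produces not just a set of salient objects but an ordering over them. The sketch below is a hypothetical illustration of that output format, assuming per-instance saliency scores from some detector head; the function name and scores are illustrative and not part of the paper's method.

```python
# Hypothetical illustration of saliency ranking: given one saliency score per
# detected instance, return the instances ordered from most to least salient.
# Scores and the rank_instances helper are assumptions for illustration only.

def rank_instances(scores):
    """Return instance indices sorted by descending saliency score."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Three detected instances with illustrative saliency scores.
scores = [0.31, 0.87, 0.55]
ranking = rank_instances(scores)
print(ranking)  # most salient instance first
```

A dataset built from human fixations would supply the ground-truth ordering that such predicted rankings are evaluated against.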

Cite

Text

Deng et al. "Advancing Saliency Ranking with Human Fixations: Dataset, Models and Benchmarks." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02678

Markdown

[Deng et al. "Advancing Saliency Ranking with Human Fixations: Dataset, Models and Benchmarks." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/deng2024cvpr-advancing/) doi:10.1109/CVPR52733.2024.02678

BibTeX

@inproceedings{deng2024cvpr-advancing,
  title     = {{Advancing Saliency Ranking with Human Fixations: Dataset, Models and Benchmarks}},
  author    = {Deng, Bowen and Song, Siyang and French, Andrew P. and Schluppeck, Denis and Pound, Michael P.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {28348--28357},
  doi       = {10.1109/CVPR52733.2024.02678},
  url       = {https://mlanthology.org/cvpr/2024/deng2024cvpr-advancing/}
}