Learning Saliency from Fixations
Abstract
We present a novel approach for saliency prediction in images, leveraging parallel decoding in transformers to learn saliency solely from fixation maps. Existing models typically rely on continuous saliency maps to sidestep the difficulty of optimizing for discrete fixation maps. We instead attempt to replicate the experimental setup used to generate saliency datasets. Our approach treats saliency prediction as a direct set prediction problem, via a global loss that enforces unique fixation predictions through bipartite matching, combined with a transformer encoder-decoder architecture. Using a fixed set of learned fixation queries, the cross-attention reasons over the image features to directly output the fixation points, distinguishing our model from other modern saliency predictors. Our approach, named Saliency TRansformer (SalTR), achieves remarkable results on the Salicon benchmark.
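For concreteness, the sketch below illustrates the kind of set-prediction objective the abstract describes: a fixed set of queries each predicts a fixation point, and bipartite (Hungarian) matching against the ground-truth fixations defines a loss that enforces unique assignments. This is not the authors' implementation; the function name `fixation_matching_loss`, the L1 matching cost, and the [0, 1] coordinate convention are assumptions made purely for illustration.

```python
# Minimal sketch, not the paper's released code: a DETR-style bipartite
# matching loss for fixation set prediction. Assumes (hypothetically) that
# each query regresses a 2-D fixation coordinate in [0, 1] and that the
# matching cost is the pairwise L1 distance.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def fixation_matching_loss(pred_points, gt_points):
    """pred_points: (num_queries, 2) fixation coordinates predicted by the queries.
    gt_points:    (num_fixations, 2) ground-truth fixation coordinates.
    Returns the mean L1 distance over the optimal one-to-one assignment."""
    # Pairwise L1 cost between every prediction and every ground-truth fixation.
    cost = torch.cdist(pred_points, gt_points, p=1)  # (num_queries, num_fixations)
    # Hungarian matching yields the unique assignment the global loss enforces.
    row_idx, col_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    row_idx = torch.as_tensor(row_idx, dtype=torch.long)
    col_idx = torch.as_tensor(col_idx, dtype=torch.long)
    # Regress only the matched pairs; unmatched queries would be handled by a
    # "no fixation" class in a fuller model.
    return F.l1_loss(pred_points[row_idx], gt_points[col_idx])


# Toy usage: 16 learned queries, 10 ground-truth fixations.
preds = torch.rand(16, 2, requires_grad=True)
gts = torch.rand(10, 2)
loss = fixation_matching_loss(preds, gts)
loss.backward()
```

In a fuller model, unmatched queries would typically be supervised with a "no fixation" class and the matching cost would include that classification term, mirroring the standard set-prediction recipe.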
Cite
Text
Djilali et al. "Learning Saliency from Fixations." Winter Conference on Applications of Computer Vision, 2024.
Markdown
[Djilali et al. "Learning Saliency from Fixations." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/djilali2024wacv-learning/)
BibTeX
@inproceedings{djilali2024wacv-learning,
title = {{Learning Saliency from Fixations}},
author = {Djilali, Yasser Abdelaziz Dahou and McGuinness, Kevin and O’Connor, Noel},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {383--393},
url = {https://mlanthology.org/wacv/2024/djilali2024wacv-learning/}
}