Recurrent Attentional Networks for Saliency Detection
Abstract
Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. However, they do not work well with objects of multiple scales. To overcome this limitation, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN iteratively attends to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods.
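The sketch below illustrates the recurrent attentional idea described in the abstract: a recurrent state predicts attention parameters, a spatial transformer crops the attended sub-region of a coarse saliency map, and a small refinement head updates that region over several iterations. This is not the authors' code; the layer sizes, the use of an LSTM cell, the `refine` head, and the simplified paste-back step are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAttentionSketch(nn.Module):
    """Minimal sketch of recurrent attention over saliency sub-regions."""

    def __init__(self, feat_ch=64, hidden=256, glimpse=32, steps=4):
        super().__init__()
        self.steps = steps
        self.glimpse = glimpse
        # Summarise the attended region into a vector for the recurrent unit.
        self.encode = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTMCell(feat_ch, hidden)
        # Predict the 2x3 affine parameters (scale/translation) of the next glimpse.
        self.loc = nn.Linear(hidden, 6)
        # Refine saliency inside the attended window (illustrative head).
        self.refine = nn.Sequential(
            nn.Conv2d(feat_ch + 1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 1, 1))

    def forward(self, feats, coarse_sal):
        n = feats.size(0)
        h = feats.new_zeros(n, self.rnn.hidden_size)
        c = feats.new_zeros(n, self.rnn.hidden_size)
        sal = coarse_sal
        for _ in range(self.steps):
            # Spatial transformer: sample the attended sub-region.
            theta = self.loc(h).view(n, 2, 3)
            grid = F.affine_grid(
                theta, (n, feats.size(1), self.glimpse, self.glimpse),
                align_corners=False)
            region_feat = F.grid_sample(feats, grid, align_corners=False)
            region_sal = F.grid_sample(sal, grid, align_corners=False)
            # Refine the saliency of the attended sub-region.
            delta = self.refine(torch.cat([region_feat, region_sal], dim=1))
            refined = torch.sigmoid(region_sal + delta)
            # Blend the refined region back into the full map (simplified stand-in
            # for the paper's inverse spatial transform).
            up = F.interpolate(refined, size=sal.shape[-2:], mode='bilinear',
                               align_corners=False)
            sal = 0.5 * (sal + up)
            # Update the recurrent state so later glimpses are context-aware.
            h, c = self.rnn(self.encode(region_feat), (h, c))
        return sal
```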
Cite
Text
Kuen et al. "Recurrent Attentional Networks for Saliency Detection." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.399
Markdown
[Kuen et al. "Recurrent Attentional Networks for Saliency Detection." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/kuen2016cvpr-recurrent/) doi:10.1109/CVPR.2016.399
BibTeX
@inproceedings{kuen2016cvpr-recurrent,
title = {{Recurrent Attentional Networks for Saliency Detection}},
author = {Kuen, Jason and Wang, Zhenhua and Wang, Gang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.399},
url = {https://mlanthology.org/cvpr/2016/kuen2016cvpr-recurrent/}
}