Spatio-Temporal Enhanced Sparse Feature Selection for Video Saliency Estimation
Abstract
The video saliency mechanism is crucial in the human visual system and helpful for object detection and recognition. In this paper we propose a novel video saliency model positing that salient regions should be both consistently salient across consecutive frames and temporally novel due to motion or appearance changes. Based on this model, temporal coherence, in addition to spatial saliency, is fully considered by introducing temporal consistency and temporal difference into sparse feature selection. Features selected spatio-temporally are enhanced and fused to generate the proposed video saliency maps. Comparisons with several state-of-the-art methods on two public video datasets further demonstrate the effectiveness of our method.
Cite
Text
Luo and Tian. "Spatio-Temporal Enhanced Sparse Feature Selection for Video Saliency Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012. doi:10.1109/CVPRW.2012.6239258
Markdown
[Luo and Tian. "Spatio-Temporal Enhanced Sparse Feature Selection for Video Saliency Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012.](https://mlanthology.org/cvprw/2012/luo2012cvprw-spatiotemporal/) doi:10.1109/CVPRW.2012.6239258
BibTeX
@inproceedings{luo2012cvprw-spatiotemporal,
title = {{Spatio-Temporal Enhanced Sparse Feature Selection for Video Saliency Estimation}},
author = {Luo, Ye and Tian, Qi},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2012},
  pages = {33--38},
doi = {10.1109/CVPRW.2012.6239258},
url = {https://mlanthology.org/cvprw/2012/luo2012cvprw-spatiotemporal/}
}