Time-Mapping Using Space-Time Saliency
Abstract
We describe a new approach for generating regular-speed, low-frame-rate (LFR) video from a high-frame-rate (HFR) input while preserving the important moments in the original. We call this time-mapping, a time-based analogy to spatial tone-mapping from high-dynamic-range to low-dynamic-range images. Our approach makes these contributions: (1) a robust space-time saliency method for evaluating visual importance, (2) a re-timing technique that temporally resamples based on frame importance, and (3) temporal filters that enhance the rendering of salient motion. Results of our space-time saliency method on a benchmark dataset show it is state-of-the-art. In addition, a user study demonstrates the benefits of our approach to HFR-to-LFR time-mapping over more direct methods.
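To make the re-timing idea concrete, the sketch below shows one way importance-driven temporal resampling could work: output frames are allocated in proportion to per-frame saliency, so salient spans are compressed less than unimportant ones. This is only an illustrative assumption, not the paper's actual algorithm; the function name, the uniform-target sampling scheme, and the placeholder saliency scores are all hypothetical.

```python
import numpy as np

def retime_by_importance(saliency, n_out):
    """Select n_out LFR frame indices from an HFR sequence,
    allocating output frames in proportion to per-frame saliency.

    saliency: 1-D array of per-frame importance scores (one per HFR frame).
    n_out:    number of LFR output frames to keep.
    Returns the HFR frame indices to retain.
    """
    s = np.asarray(saliency, dtype=float)
    s = s - s.min()
    total = s.sum()
    if total == 0:
        s = np.full(len(s), 1.0 / len(s))   # fall back to uniform sampling
    else:
        s = s / total
    # Cumulative importance acts as a monotone time-warp: high-saliency
    # spans receive more output frames (played closer to real time),
    # while low-saliency spans are skipped through more quickly.
    cdf = np.cumsum(s)
    targets = (np.arange(n_out) + 0.5) / n_out
    return np.searchsorted(cdf, targets)

# Example: 240 fps input re-timed to 30 output frames per second of footage,
# using random placeholder saliency scores.
rng = np.random.default_rng(0)
importance = rng.random(240)
kept = retime_by_importance(importance, 30)
print(kept)
```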
Cite
Text
Zhou et al. "Time-Mapping Using Space-Time Saliency." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.429

Markdown
[Zhou et al. "Time-Mapping Using Space-Time Saliency." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/zhou2014cvpr-timemapping/) doi:10.1109/CVPR.2014.429

BibTeX
@inproceedings{zhou2014cvpr-timemapping,
title = {{Time-Mapping Using Space-Time Saliency}},
author = {Zhou, Feng and Kang, Sing Bing and Cohen, Michael F.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2014},
doi = {10.1109/CVPR.2014.429},
url = {https://mlanthology.org/cvpr/2014/zhou2014cvpr-timemapping/}
}