Towards Semantic Fast-Forward and Stabilized Egocentric Videos
Abstract
The emergence of low-cost personal mobile devices and wearable cameras, together with the increasing storage capacity of video-sharing websites, has pushed forward a growing interest in first-person videos. Since most recorded videos are long-running streams of unedited content, they are tedious and unpleasant to watch. State-of-the-art fast-forward methods face the challenge of balancing the smoothness of the video against the emphasis on the relevant frames for a given speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting the semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos, which is used to test our method.
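To make the smoothness-versus-relevance trade-off concrete, the sketch below illustrates one possible way to select frames for a fast-forward: a dynamic program that penalizes jumps deviating from a target speed-up while rewarding frames with high semantic scores. This is only an illustrative assumption, not the authors' formulation; the scores, weights, and cost terms are hypothetical.

```python
# Minimal sketch (not the paper's method) of balancing smoothness and semantic
# emphasis when fast-forwarding: keep jumps near the target speed-up while
# preferring frames with high semantic scores. All weights are illustrative.

def select_frames(semantic_scores, speedup, max_jump=None, w_semantic=1.0, w_speed=1.0):
    """Pick a subsequence of frame indices by dynamic programming over jump costs."""
    n = len(semantic_scores)
    max_jump = max_jump or 2 * speedup
    INF = float("inf")
    cost = [INF] * n          # best accumulated cost ending at each frame
    prev = [-1] * n           # back-pointers to rebuild the selection
    cost[0] = 0.0
    for j in range(1, n):
        for i in range(max(0, j - max_jump), j):
            if cost[i] == INF:
                continue
            jump_penalty = w_speed * ((j - i) - speedup) ** 2   # deviation from target rate
            semantic_bonus = w_semantic * semantic_scores[j]    # reward relevant frames
            c = cost[i] + jump_penalty - semantic_bonus
            if c < cost[j]:
                cost[j], prev[j] = c, i
    # Walk the back-pointers from the last frame to recover the selected indices.
    path, j = [], n - 1
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1]

if __name__ == "__main__":
    scores = [0.1, 0.9, 0.2, 0.1, 0.8, 0.1, 0.1, 0.7, 0.2, 0.1]
    print(select_frames(scores, speedup=3))
```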
Cite
Text
Silva et al. "Towards Semantic Fast-Forward and Stabilized Egocentric Videos." European Conference on Computer Vision Workshops, 2016. doi:10.1007/978-3-319-46604-0_40
Markdown
[Silva et al. "Towards Semantic Fast-Forward and Stabilized Egocentric Videos." European Conference on Computer Vision Workshops, 2016.](https://mlanthology.org/eccvw/2016/silva2016eccvw-semantic/) doi:10.1007/978-3-319-46604-0_40
BibTeX
@inproceedings{silva2016eccvw-semantic,
title = {{Towards Semantic Fast-Forward and Stabilized Egocentric Videos}},
author = {Silva, Michel Melo and Ramos, Washington Luis Souza and Ferreira, João Pedro Klock and Campos, Mario Fernando Montenegro and do Nascimento, Erickson Rangel},
booktitle = {European Conference on Computer Vision Workshops},
year = {2016},
pages = {557-571},
doi = {10.1007/978-3-319-46604-0_40},
url = {https://mlanthology.org/eccvw/2016/silva2016eccvw-semantic/}
}