Unsupervised Semantic Parsing of Video Collections
Abstract
Human communication typically has an underlying structure. This is reflected in the fact that in many user-generated videos, a starting point, an ending, and certain objective steps between the two can be identified. In this paper, we propose a method for parsing a video into such semantic steps in an unsupervised way. The proposed method is capable of providing a semantic ``storyline'' of the video composed of its objective steps. We accomplish this by utilizing both visual and language cues in a joint generative model. The proposed method can also provide a textual description for each of the identified semantic steps and video segments. We evaluate this method on a large number of complex YouTube videos and show results of unprecedented quality for this new and impactful problem.
Cite
Text
Sener et al. "Unsupervised Semantic Parsing of Video Collections." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.509
Markdown
[Sener et al. "Unsupervised Semantic Parsing of Video Collections." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/sener2015iccv-unsupervised/) doi:10.1109/ICCV.2015.509
BibTeX
@inproceedings{sener2015iccv-unsupervised,
title = {{Unsupervised Semantic Parsing of Video Collections}},
author = {Sener, Ozan and Zamir, Amir R. and Savarese, Silvio and Saxena, Ashutosh},
booktitle = {International Conference on Computer Vision},
year = {2015},
doi = {10.1109/ICCV.2015.509},
url = {https://mlanthology.org/iccv/2015/sener2015iccv-unsupervised/}
}