Space-Time Video Montage

Abstract

Conventional video summarization methods focus predominantly on summarizing videos along the time axis, such as building a movie trailer: the resulting trailer tends to retain much empty space in the background of the video frames while discarding much informative video content due to the size limit. In this paper we propose a novel space-time video summarization method which we call space-time video montage. The method simultaneously analyzes both the spatial and temporal information distribution in a video sequence, and extracts the visually informative space-time portions of the input videos. The informative video portions are represented in volumetric layers. The layers are then packed together in a small output video volume such that the total amount of visual information in the video volume is maximized. To achieve the packing process, we develop a new algorithm based upon the first-fit and graph cut optimization techniques. Since our method is able to cut off spatially and temporally less informative portions, it is able to generate much more compact yet highly informative output videos. The effectiveness of our method is validated by extensive experiments over a wide variety of videos.
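The packing step combines a first-fit heuristic with graph cut optimization. As a rough, hypothetical illustration of the first-fit idea only (the paper's method operates on 3-D space-time volumes with graph cut refinement, which is not reproduced here), the sketch below greedily places "layers", ranked by an assumed information score, at the first temporal offset where they do not overlap previously placed layers. The 1-D simplification and all names are assumptions, not the authors' implementation.

```python
# Hypothetical 1-D simplification of first-fit packing: each "layer"
# occupies an interval of output-video frames; layers are placed in
# order of decreasing information score at the first offset that
# does not collide with already-placed layers.

def first_fit_pack(layers, output_len):
    """layers: list of (info_score, length) tuples.
    Returns (start, length) placements for layers that fit."""
    placed = []                       # accepted placements
    occupied = [False] * output_len   # which output frames are taken

    # Most informative layers get first choice of position.
    for score, length in sorted(layers, key=lambda l: -l[0]):
        for start in range(output_len - length + 1):
            if not any(occupied[start:start + length]):
                placed.append((start, length))
                for t in range(start, start + length):
                    occupied[t] = True
                break  # first fit found; move on to the next layer
    return placed

layers = [(0.9, 4), (0.7, 3), (0.5, 2), (0.2, 5)]
print(first_fit_pack(layers, 10))  # → [(0, 4), (4, 3), (7, 2)]
```

Note how the least informative layer is dropped when no gap remains, mirroring the paper's goal of maximizing total visual information within a fixed output volume.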

Cite

Text

Kang et al. "Space-Time Video Montage." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.284

Markdown

[Kang et al. "Space-Time Video Montage." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/kang2006cvpr-space/) doi:10.1109/CVPR.2006.284

BibTeX

@inproceedings{kang2006cvpr-space,
  title     = {{Space-Time Video Montage}},
  author    = {Kang, Hong-Wen and Matsushita, Yasuyuki and Tang, Xiaoou and Chen, Xue-Quan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2006},
  pages     = {1331--1338},
  doi       = {10.1109/CVPR.2006.284},
  url       = {https://mlanthology.org/cvpr/2006/kang2006cvpr-space/}
}