Story-Driven Summarization for Egocentric Video

Abstract

We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk-based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video captured by 23 unique camera wearers, and systematically evaluate its quality against multiple baselines with 34 human subjects.
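To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of a random-walk-style influence score between subshots. It assumes each subshot is described by a vector of detected-object occurrences, scores pairwise influence with a personalized-PageRank-like walk over object similarity, and greedily chains k subshots in temporal order; all function names and parameters are illustrative assumptions.

```python
# Hypothetical sketch: random-walk influence between subshots and a greedy
# k-subshot story chain. Not the paper's actual algorithm or code.
import numpy as np

def influence_matrix(objects, restart=0.15):
    """influence[i, j]: probability mass a restart-at-i random walk places on j,
    where transitions follow object-based similarity between subshots."""
    objects = np.asarray(objects, dtype=float)
    # Assumption: cosine similarity over per-subshot object histograms.
    sim = objects @ objects.T
    norms = np.linalg.norm(objects, axis=1, keepdims=True) + 1e-9
    sim /= norms @ norms.T
    np.fill_diagonal(sim, 0.0)
    # Row-stochastic transition matrix over subshots.
    P = sim / (sim.sum(axis=1, keepdims=True) + 1e-9)
    n = len(P)
    eye = np.eye(n)
    # Personalized PageRank: columns of the solve are the stationary
    # distributions for restarts at each subshot; transpose so rows index i.
    return np.linalg.solve(eye - (1.0 - restart) * P.T, restart * eye).T

def greedy_story_chain(objects, k):
    """Greedily extend a temporally ordered chain with the later subshot
    that the current chain end most strongly influences."""
    infl = influence_matrix(objects)
    chain = [0]
    while len(chain) < k:
        last = chain[-1]
        nxt = max(range(last + 1, len(objects)),
                  key=lambda j: infl[last, j], default=None)
        if nxt is None:
            break
        chain.append(nxt)
    return chain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = (rng.random((12, 8)) > 0.6).astype(float)  # 12 subshots, 8 object types
    print(greedy_story_chain(demo, k=4))
```

This sketch only captures the flavor of the abstract: influence derived from a random walk over object relationships, then a chained (rather than purely diverse or representative) selection of k subshots.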

Cite

Text

Lu and Grauman. "Story-Driven Summarization for Egocentric Video." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.350

Markdown

[Lu and Grauman. "Story-Driven Summarization for Egocentric Video." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/lu2013cvpr-storydriven/) doi:10.1109/CVPR.2013.350

BibTeX

@inproceedings{lu2013cvpr-storydriven,
  title     = {{Story-Driven Summarization for Egocentric Video}},
  author    = {Lu, Zheng and Grauman, Kristen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2013},
  doi       = {10.1109/CVPR.2013.350},
  url       = {https://mlanthology.org/cvpr/2013/lu2013cvpr-storydriven/}
}