Detecting Engagement in Egocentric Video

Abstract

In a wearable camera video, we see what the camera wearer sees. While this makes it easy to know roughly what he chose to look at, it does not immediately reveal when he was engaged with the environment. Specifically, at what moments did his focus linger, as he paused to gather more information about something he saw? Knowing this answer would benefit various applications in video summarization and augmented reality, yet prior work focuses solely on the “what” question (estimating saliency, gaze) without considering the “when” (engagement). We propose a learning-based approach that uses long-term egomotion cues to detect engagement, specifically in browsing scenarios where one frequently takes in new visual information (e.g., shopping, touring). We introduce a large, richly annotated dataset for ego-engagement that is the first of its kind. Our approach outperforms a wide array of existing methods. We show that engagement can be detected well, independent of both scene appearance and the camera wearer’s identity.
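The paper's learned model is not reproduced on this page. Purely as a minimal sketch of the intuition behind long-term egomotion cues (sustained low camera motion as a sign that the wearer's focus is lingering), the NumPy snippet below smooths a per-frame motion-magnitude signal over a long temporal window and thresholds it. The function name, window length, threshold, and synthetic signal are illustrative assumptions, not values from the paper.

import numpy as np

def engagement_candidates(motion_mag, fps=30, window_sec=2.0, thresh=0.3):
    """Mark frames whose smoothed egomotion magnitude stays below
    `thresh`, i.e., moments where the wearer's focus may be lingering.
    A simplified stand-in for the paper's learned approach."""
    win = max(1, int(window_sec * fps))            # long temporal window
    kernel = np.ones(win) / win                    # moving-average filter
    smoothed = np.convolve(motion_mag, kernel, mode="same")
    return smoothed < thresh                       # boolean per-frame mask

# Toy usage: fast browsing, a pause, then fast browsing again.
motion = np.concatenate([np.full(90, 1.0), np.full(90, 0.1), np.full(90, 1.0)])
mask = engagement_candidates(motion)
print(mask.sum(), "of", mask.size, "frames flagged as candidate engagement")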

Cite

Text

Su and Grauman. "Detecting Engagement in Egocentric Video." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46454-1_28

Markdown

[Su and Grauman. "Detecting Engagement in Egocentric Video." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/su2016eccv-detecting/) doi:10.1007/978-3-319-46454-1_28

BibTeX

@inproceedings{su2016eccv-detecting,
  title     = {{Detecting Engagement in Egocentric Video}},
  author    = {Su, Yu-Chuan and Grauman, Kristen},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {454--471},
  doi       = {10.1007/978-3-319-46454-1_28},
  url       = {https://mlanthology.org/eccv/2016/su2016eccv-detecting/}
}