Analysis of Gesture and Action in Technical Talks for Video Indexing

Abstract

We present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences, each containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple "vocabulary" of actions that can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.
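
The abstract describes a pipeline of dominant-motion estimation, key-frame detection, and per-slide stabilization. The sketch below illustrates only that first stage and is not the authors' implementation: it assumes OpenCV is available, substitutes sparse feature tracking with RANSAC for the paper's robust regression-based motion estimation, and uses illustrative function names and thresholds (segment_and_stabilize, residual_thresh).

import cv2
import numpy as np

def estimate_affine(prev_gray, gray):
    """Estimate the dominant affine motion between two grayscale frames by
    tracking sparse corners (a stand-in for the paper's robust motion
    estimation)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    if len(good_prev) < 6:
        return None
    # RANSAC rejects outliers such as the speaker's hand moving over the slide.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    return M

def segment_and_stabilize(frames, residual_thresh=30.0):
    """Split `frames` into subsequences, one per slide, and warp every frame
    of a subsequence into the coordinate frame of its key frame."""
    segments, current = [], []
    prev_gray, key_gray = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            current, key_gray = [frame], gray
        else:
            M = estimate_affine(prev_gray, gray)
            warped = (cv2.warpAffine(gray, M, gray.shape[::-1],
                                     flags=cv2.WARP_INVERSE_MAP)
                      if M is not None else gray)
            residual = np.mean(cv2.absdiff(warped, prev_gray))
            if M is None or residual > residual_thresh:
                # Large unexplained change: treat this frame as a new key
                # frame and start a new slide subsequence.
                segments.append(current)
                current, key_gray = [frame], gray
            else:
                # Same slide: register the frame against the key frame so
                # that only gestures (pointing, writing) remain as change.
                Mk = estimate_affine(key_gray, gray)
                stabilized = (cv2.warpAffine(frame, Mk, gray.shape[::-1],
                                             flags=cv2.WARP_INVERSE_MAP)
                              if Mk is not None else frame)
                current.append(stabilized)
        prev_gray = gray
    if current:
        segments.append(current)
    return segments

Residual change within a stabilized subsequence would then be handed to the gesture-tracking and action-recognition stages described in the abstract.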

Cite

Text

Ju et al. "Analysis of Gesture and Action in Technical Talks for Video Indexing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1997. doi:10.1109/CVPR.1997.609386

Markdown

[Ju et al. "Analysis of Gesture and Action in Technical Talks for Video Indexing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1997.](https://mlanthology.org/cvpr/1997/ju1997cvpr-analysis/) doi:10.1109/CVPR.1997.609386

BibTeX

@inproceedings{ju1997cvpr-analysis,
  title     = {{Analysis of Gesture and Action in Technical Talks for Video Indexing}},
  author    = {Ju, Shanon X. and Black, Michael J. and Minneman, Scott L. and Kimber, Don},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {1997},
  pages     = {595--601},
  doi       = {10.1109/CVPR.1997.609386},
  url       = {https://mlanthology.org/cvpr/1997/ju1997cvpr-analysis/}
}