Automatic Annotation of Human Actions in Video

Abstract

This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end, we consider two associated problems: (a) weakly supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. We address this problem with a kernel-based discriminative clustering algorithm that localizes actions in the weakly labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature-length movies with challenging and realistic video data.
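
The abstract describes an alternation between selecting a temporal window inside each script-aligned training clip and retraining a classifier on the current selection. The sketch below illustrates that alternation under stated assumptions; it is not the paper's exact kernel-based discriminative clustering formulation. It assumes precomputed bag-of-features descriptors for candidate windows, and the function name refine_action_windows, the linear SVM stand-in for the kernel classifier, and the fixed negative window set are all illustrative choices.

```python
import numpy as np
from sklearn.svm import LinearSVC


def refine_action_windows(clip_window_feats, neg_feats, n_iters=10):
    """Alternate between (1) training a classifier on the currently selected
    windows and (2) re-selecting, in each weakly labeled clip, the window the
    classifier scores highest. A simplified, linear stand-in for the paper's
    kernel-based discriminative clustering step.

    clip_window_feats: list of (n_windows_i, d) arrays, one per training clip
                       whose script suggests the action occurs somewhere inside.
    neg_feats:         (n_neg, d) array of windows assumed not to contain the action.
    """
    selected = [0] * len(clip_window_feats)  # start with the first window of each clip
    clf = None
    for _ in range(n_iters):
        # Positives: one currently selected window per weakly labeled clip.
        pos = np.vstack([w[i] for w, i in zip(clip_window_feats, selected)])
        X = np.vstack([pos, neg_feats])
        y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg_feats))])
        clf = LinearSVC(C=1.0).fit(X, y)
        # Re-assign each clip to the window the current classifier prefers.
        new_selected = [int(np.argmax(clf.decision_function(w)))
                        for w in clip_window_feats]
        if new_selected == selected:  # stop when the assignment is stable
            break
        selected = new_selected
    return selected, clf
```

In this simplified form, the weakly supervised step reduces to coordinate descent between classifier training and per-clip window assignment; the refined windows can then serve as positive samples for training the temporal action detectors applied to test videos.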

Cite

Text

Duchenne et al. "Automatic Annotation of Human Actions in Video." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459279

Markdown

[Duchenne et al. "Automatic Annotation of Human Actions in Video." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/duchenne2009iccv-automatic/) doi:10.1109/ICCV.2009.5459279

BibTeX

@inproceedings{duchenne2009iccv-automatic,
  title     = {{Automatic Annotation of Human Actions in Video}},
  author    = {Duchenne, Olivier and Laptev, Ivan and Sivic, Josef and Bach, Francis R. and Ponce, Jean},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2009},
  pages     = {1491--1498},
  doi       = {10.1109/ICCV.2009.5459279},
  url       = {https://mlanthology.org/iccv/2009/duchenne2009iccv-automatic/}
}