Learning the Abstract Motion Semantics of Verbs from Captioned Videos

Abstract

We propose an algorithm for learning the semantics of a (motion) verb from videos depicting the action expressed by the verb, paired with sentences describing the action participants and their roles. Acknowledging that commonalities among example videos may not exist at the level of the input features, our approximation algorithm efficiently searches the space of more abstract features for a common solution. We test our algorithm by using it to learn the semantics of a sample set of verbs; results demonstrate the usefulness of the proposed framework, while identifying directions for further improvement.

Cite

Text

Mathe et al. "Learning the Abstract Motion Semantics of Verbs from Captioned Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2008. doi:10.1109/CVPRW.2008.4563042

Markdown

[Mathe et al. "Learning the Abstract Motion Semantics of Verbs from Captioned Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2008.](https://mlanthology.org/cvprw/2008/mathe2008cvprw-learning/) doi:10.1109/CVPRW.2008.4563042

BibTeX

@inproceedings{mathe2008cvprw-learning,
  title     = {{Learning the Abstract Motion Semantics of Verbs from Captioned Videos}},
  author    = {Mathe, Stefan and Fazly, Afsaneh and Dickinson, Sven J. and Stevenson, Suzanne},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2008},
  pages     = {1--8},
  doi       = {10.1109/CVPRW.2008.4563042},
  url       = {https://mlanthology.org/cvprw/2008/mathe2008cvprw-learning/}
}