Learning Signs from Subtitles: A Weakly Supervised Approach to Sign Language Recognition

Abstract

This paper introduces a fully automated, unsupervised method to recognise sign from subtitles. It does this by using data mining to align correspondences in sections of videos. Based on head and hand tracking, a novel temporally constrained adaptation of a priori mining is used to extract similar regions of video, with the aid of a proposed contextual negative selection method. These regions are refined in the temporal domain to isolate the occurrences of similar signs in each example. The system is shown to automatically identify and segment signs from standard news broadcasts containing a variety of topics.
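The "a priori mining" the abstract adapts is classical Apriori frequent-itemset mining. As a point of reference, a minimal generic sketch of that base algorithm (not the authors' temporally constrained variant; all names here are illustrative) might look like:

```python
def apriori(transactions, min_support):
    """Classical Apriori frequent-itemset mining (illustrative sketch only;
    the paper adapts this idea with temporal constraints for video).
    transactions: list of sets of hashable items.
    min_support: minimum number of transactions an itemset must appear in.
    """
    # Count 1-itemsets and keep those meeting the support threshold.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)

    k = 2
    while frequent:
        # Candidate k-itemsets: unions of frequent (k-1)-itemsets.
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        counts = {}
        for c in candidates:
            for t in transactions:
                if c <= t:  # candidate occurs in this transaction
                    counts[c] = counts.get(c, 0) + 1
        frequent = {s: n for s, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result
```

In the paper's setting, "transactions" would correspond to feature occurrences in subtitle-aligned video sections, with the proposed temporal constraint and contextual negative selection pruning the candidates further.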

Cite

Text

Cooper and Bowden. "Learning Signs from Subtitles: A Weakly Supervised Approach to Sign Language Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206647

Markdown

[Cooper and Bowden. "Learning Signs from Subtitles: A Weakly Supervised Approach to Sign Language Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/cooper2009cvpr-learning/) doi:10.1109/CVPR.2009.5206647

BibTeX

@inproceedings{cooper2009cvpr-learning,
  title     = {{Learning Signs from Subtitles: A Weakly Supervised Approach to Sign Language Recognition}},
  author    = {Cooper, Helen and Bowden, Richard},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2009},
  pages     = {2568--2574},
  doi       = {10.1109/CVPR.2009.5206647},
  url       = {https://mlanthology.org/cvpr/2009/cooper2009cvpr-learning/}
}