Space-Time Gestures
Abstract
A method for learning, tracking, and recognizing human gestures using a view-based approach to model articulated objects is presented. Objects are represented using sets of view models, rather than single templates. Stereotypical space-time patterns, i.e., gestures, are then matched to stored gesture patterns using dynamic time warping. Real-time performance is achieved by using special purpose correlation hardware and view prediction to prune as much of the search space as possible. Both view models and view predictions are learned from examples. Results showing tracking and recognition of human hand gestures at over 10 Hz are presented.
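The abstract's matching step, aligning an observed space-time pattern against stored gesture templates with dynamic time warping, can be sketched as follows. This is a minimal illustrative sketch of standard DTW, not the authors' implementation; the 1-D feature sequences and the function name `dtw_distance` are assumptions for the example.

```python
def dtw_distance(query, template):
    """Dynamic time warping cost between two 1-D feature sequences.

    Illustrative sketch only: the paper matches sequences of view-model
    correlation scores, but any numeric trajectories work here.
    """
    n, m = len(query), len(template)
    INF = float("inf")
    # cost[i][j] = best alignment cost of query[:i] against template[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - template[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            cost[i][j] = d + min(cost[i - 1][j],      # stretch query
                                 cost[i][j - 1],      # stretch template
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

Recognition then amounts to computing this cost against each stored gesture pattern and picking the minimum; the paper additionally prunes the search with view prediction and special-purpose correlation hardware to reach real-time rates.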
Cite
Text
Darrell and Pentland. "Space-Time Gestures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1993. doi:10.1109/CVPR.1993.341109
Markdown
[Darrell and Pentland. "Space-Time Gestures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1993.](https://mlanthology.org/cvpr/1993/darrell1993cvpr-space/) doi:10.1109/CVPR.1993.341109
BibTeX
@inproceedings{darrell1993cvpr-space,
  title     = {{Space-Time Gestures}},
  author    = {Darrell, Trevor and Pentland, Alex},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {1993},
  pages     = {335-340},
  doi       = {10.1109/CVPR.1993.341109},
  url       = {https://mlanthology.org/cvpr/1993/darrell1993cvpr-space/}
}