Continuous Body and Hand Gesture Recognition for Natural Human-Computer Interaction: Extended Abstract
Abstract
We present a new approach to gesture recognition that tracks body and hands simultaneously and recognizes gestures continuously from an unsegmented and unbounded input stream. Our system estimates 3D coordinates of upper-body joints and classifies the appearance of hands into a set of canonical shapes. A novel multi-layered filtering technique with a temporal sliding window is developed to enable online sequence labeling and segmentation. Experimental results on the NATOPS dataset show the effectiveness of the approach. We also report on our recent work on multimodal gesture recognition and deep hierarchical sequence representation learning, which achieves state-of-the-art performance on several real-world datasets.
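The online labeling idea can be illustrated with a minimal sliding-window sketch (a hypothetical simplification; the paper's actual multi-layered filtering is more involved): noisy per-frame gesture predictions are buffered in a fixed-size temporal window, and a smoothed label is emitted for each frame by majority vote.

```python
from collections import Counter, deque

def sliding_window_labels(frame_preds, window=5):
    """Smooth noisy per-frame gesture predictions with a majority vote
    over a temporal sliding window (a simplified stand-in for the
    paper's multi-layered filtering)."""
    buf = deque(maxlen=window)  # holds the most recent `window` predictions
    out = []
    for p in frame_preds:
        buf.append(p)
        # Emit the most common label within the current window.
        out.append(Counter(buf).most_common(1)[0][0])
    return out

# Noisy stream: a spurious 'B' inside a run of 'A' gets filtered out,
# while the sustained switch to 'B' is eventually recognized.
stream = ["A", "A", "B", "A", "A", "A", "B", "B", "B", "B"]
print(sliding_window_labels(stream))
# → ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B']
```

Because the vote is computed incrementally as frames arrive, this style of filtering works on an unbounded stream without requiring pre-segmented gesture boundaries.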
Cite

Song and Davis. "Continuous Body and Hand Gesture Recognition for Natural Human-Computer Interaction: Extended Abstract." International Joint Conference on Artificial Intelligence, 2015.

BibTeX
@inproceedings{song2015ijcai-continuous,
  title     = {{Continuous Body and Hand Gesture Recognition for Natural Human-Computer Interaction: Extended Abstract}},
  author    = {Song, Yale and Davis, Randall},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {4212--4216},
  url       = {https://mlanthology.org/ijcai/2015/song2015ijcai-continuous/}
}