Learning Dynamics for Exemplar-Based Gesture Recognition
Abstract
This paper addresses the problem of capturing dynamics for exemplar-based recognition systems. Traditional HMMs provide a probabilistic tool to capture system dynamics, and in the exemplar paradigm, HMM states are typically coupled with the exemplars. Alternatively, we propose a non-parametric HMM approach that uses a discrete HMM with arbitrary states (decoupled from exemplars) to capture the dynamics over a large exemplar space, where a nonparametric estimation approach is used to model the exemplar distribution. This reduces the need for lengthy and non-optimal training of the HMM observation model. We apply the proposed approach to view-based recognition of gestures. The approach is based on representing each gesture as a sequence of learned body poses (exemplars). The gestures are recognized through a probabilistic framework for matching these body poses and for imposing temporal constraints between different poses using the proposed non-parametric HMM.
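The core idea in the abstract — a discrete HMM whose states are decoupled from the exemplars, with observation likelihoods modeled nonparametrically over the exemplar space — can be illustrated with a minimal sketch. This is not the paper's implementation: the Gaussian-kernel density, the toy pose vectors, the bandwidth, and all numeric values below are illustrative assumptions. It shows a standard HMM forward pass in which each state's emission probability is a kernel density estimate over a set of exemplars rather than a trained parametric model.

```python
import math

def kde_likelihood(x, exemplars, bandwidth=0.5):
    """Nonparametric (Gaussian-kernel) estimate of p(x | exemplar set).

    Replaces a trained parametric HMM emission model with a density
    estimated directly from the exemplars associated with a state.
    """
    d = len(x)
    norm = (2.0 * math.pi * bandwidth ** 2) ** (d / 2.0)
    total = 0.0
    for e in exemplars:
        sq = sum((xi - ei) ** 2 for xi, ei in zip(x, e))
        total += math.exp(-sq / (2.0 * bandwidth ** 2)) / norm
    return total / len(exemplars)

def forward_log_likelihood(obs, pi, A, state_exemplars, bandwidth=0.5):
    """Scaled HMM forward pass with KDE emissions; returns log p(obs).

    pi: initial state distribution; A: state transition matrix;
    state_exemplars[j]: exemplars backing state j (states are decoupled
    from individual exemplars -- several exemplars per state).
    """
    n = len(pi)
    alpha = [pi[j] * kde_likelihood(obs[0], state_exemplars[j], bandwidth)
             for j in range(n)]
    log_lik = 0.0
    for x in obs[1:]:
        scale = sum(alpha)          # rescale to avoid numerical underflow
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [kde_likelihood(x, state_exemplars[j], bandwidth) *
                 sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return log_lik + math.log(sum(alpha))

# Toy usage: two states, each backed by a few 2-D "pose" exemplars.
state_exemplars = [
    [(0.0, 0.0), (0.2, 0.1)],   # state 0: e.g. "arm down" poses
    [(1.0, 1.0), (0.9, 1.2)],   # state 1: e.g. "arm up" poses
]
pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.2, 0.8]]    # simple two-state dynamics
obs = [(0.1, 0.0), (0.5, 0.6), (1.0, 1.1)]
print(forward_log_likelihood(obs, pi, A, state_exemplars))
```

In a recognition setting, one such model per gesture class would score an observed pose sequence, and the highest-likelihood model wins; the sketch only computes the sequence likelihood for a single model.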
Cite
Text
Elgammal et al. "Learning Dynamics for Exemplar-Based Gesture Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003. doi:10.1109/CVPR.2003.1211405
Markdown
[Elgammal et al. "Learning Dynamics for Exemplar-Based Gesture Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003.](https://mlanthology.org/cvpr/2003/elgammal2003cvpr-learning/) doi:10.1109/CVPR.2003.1211405
BibTeX
@inproceedings{elgammal2003cvpr-learning,
title = {{Learning Dynamics for Exemplar-Based Gesture Recognition}},
author = {Elgammal, Ahmed M. and Shet, Vinay D. and Yacoob, Yaser and Davis, Larry S.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2003},
pages = {571--578},
doi = {10.1109/CVPR.2003.1211405},
url = {https://mlanthology.org/cvpr/2003/elgammal2003cvpr-learning/}
}