View-Independent Recognition of Hand Postures
Abstract
Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example of view-independent object recognition. Given the difficulties of the model-based approach, appearance-based learning is a promising way to handle large variation in visual inputs. However, many supervised learning methods generalize poorly on this problem because labeled training data are scarce. This paper describes an approach that alleviates this difficulty by adding a large unlabeled training set. Combining the supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface that recognizes a set of predefined gesture commands, and we extend it to hand detection. The algorithm also applies to other object recognition tasks.
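The core idea the abstract describes, augmenting a small labeled set with many unlabeled samples inside an EM loop, can be illustrated with a minimal semi-supervised Gaussian EM classifier. This is a hedged sketch, not the authors' D-EM (which additionally interleaves a discriminant projection step with EM); the synthetic 2-D features, class count, and isotropic-Gaussian assumption are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D features standing in for hand-posture descriptors:
# two classes, very few labeled samples, many unlabeled ones.
n_lab, n_unl = 5, 200
mu_true = np.array([[0.0, 0.0], [3.0, 3.0]])
X_lab = np.vstack([rng.normal(mu_true[c], 1.0, (n_lab, 2)) for c in (0, 1)])
y_lab = np.repeat([0, 1], n_lab)
X_unl = np.vstack([rng.normal(mu_true[c], 1.0, (n_unl, 2)) for c in (0, 1)])

def gauss_pdf(X, mu, var):
    """Isotropic Gaussian density with per-class scalar variance."""
    d = X.shape[1]
    diff = X - mu
    return np.exp(-0.5 * np.sum(diff ** 2, axis=1) / var) / (2 * np.pi * var) ** (d / 2)

# Initialize class means from the labeled data alone.
mu = np.array([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])
var = np.ones(2)
pi = np.full(2, 0.5)

for _ in range(20):
    # E-step: soft class responsibilities for unlabeled data;
    # labeled samples keep their hard (one-hot) labels.
    p = np.stack([pi[c] * gauss_pdf(X_unl, mu[c], var[c]) for c in (0, 1)], axis=1)
    r_unl = p / p.sum(axis=1, keepdims=True)
    R = np.vstack([np.eye(2)[y_lab], r_unl])
    X_all = np.vstack([X_lab, X_unl])

    # M-step: responsibility-weighted parameter updates over ALL data,
    # so the abundant unlabeled samples refine the class models.
    Nk = R.sum(axis=0)
    pi = Nk / Nk.sum()
    mu = (R.T @ X_all) / Nk[:, None]
    for c in (0, 1):
        diff = X_all - mu[c]
        var[c] = (R[:, c] * np.sum(diff ** 2, axis=1)).sum() / (2 * Nk[c])

# Classify the unlabeled points with the final responsibilities.
pred = r_unl.argmax(axis=1)
acc = (pred == np.repeat([0, 1], n_unl)).mean()
print(f"estimated means:\n{mu.round(2)}\nunlabeled accuracy: {acc:.2f}")
```

Because the labeled samples' responsibilities stay clamped throughout, the mixture components remain tied to the intended classes while the unlabeled data sharpen the estimates of the means and variances.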
Cite
Text
Wu and Huang. "View-Independent Recognition of Hand Postures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2000. doi:10.1109/CVPR.2000.854749
Markdown
[Wu and Huang. "View-Independent Recognition of Hand Postures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2000.](https://mlanthology.org/cvpr/2000/wu2000cvpr-view/) doi:10.1109/CVPR.2000.854749
BibTeX
@inproceedings{wu2000cvpr-view,
title = {{View-Independent Recognition of Hand Postures}},
author = {Wu, Ying and Huang, Thomas S.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2000},
pages = {2088-2094},
doi = {10.1109/CVPR.2000.854749},
url = {https://mlanthology.org/cvpr/2000/wu2000cvpr-view/}
}