Gesture Recognition by Learning Local Motion Signatures

Abstract

This paper presents a new gesture recognition framework based on learning local motion signatures (LMSs), introduced in [5]. After these LMSs are generated for an individual by tracking Histogram of Oriented Gradients (HOG) [2] descriptors, we learn a codebook of video-words (i.e., clusters of LMSs) by applying the k-means algorithm to a training database of gesture videos. The video-words are then compacted into a codebook of code-words by the Maximization of Mutual Information (MMI) algorithm. In the final step, the LMSs generated for a new gesture are compared against the learned codebook via the k-nearest neighbors (k-NN) algorithm and a novel voting strategy. Our main contribution is the handling of the N-to-N mapping between code-words and gesture labels by the proposed voting strategy. Experiments have been carried out on two public gesture databases, KTH [16] and IXMAS [19], and the results show that the proposed method outperforms recent state-of-the-art methods.
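The pipeline outlined in the abstract — clustering motion signatures into a codebook, then classifying a new gesture by matching its signatures to the nearest words and voting over labels — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the MMI compaction step is omitted, and the per-word label-vote table (`word_label_votes`, a hypothetical name) is a simplified stand-in for the paper's voting strategy.

```python
import numpy as np

def learn_codebook(signatures, k, iters=20, seed=0):
    """Toy k-means over stacked local motion signatures -> codebook of video-words."""
    rng = np.random.default_rng(seed)
    centers = signatures[rng.choice(len(signatures), k, replace=False)]
    for _ in range(iters):
        # assign each signature to its nearest center
        dists = np.linalg.norm(signatures[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # update each center as the mean of its assigned signatures
        for j in range(k):
            members = signatures[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def classify(test_sigs, codebook, word_label_votes, n_classes):
    """Match each test signature to its nearest word (1-NN) and accumulate
    that word's label votes; soft vote counts allow one word to support
    several gesture labels (the N-to-N mapping)."""
    scores = np.zeros(n_classes)
    for s in test_sigs:
        w = np.linalg.norm(codebook - s, axis=1).argmin()
        scores += word_label_votes[w]
    return int(scores.argmax())
```

Here `word_label_votes` would be a `(k, n_classes)` table counting, for each video-word, how often training signatures assigned to it came from each gesture class.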

Cite

Text

Kaâniche and Brémond. "Gesture Recognition by Learning Local Motion Signatures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539999

Markdown

[Kaâniche and Brémond. "Gesture Recognition by Learning Local Motion Signatures." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/kaaniche2010cvpr-gesture/) doi:10.1109/CVPR.2010.5539999

BibTeX

@inproceedings{kaaniche2010cvpr-gesture,
  title     = {{Gesture Recognition by Learning Local Motion Signatures}},
  author    = {Kaâniche, Mohamed Bécha and Brémond, François},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2010},
  pages     = {2745--2752},
  doi       = {10.1109/CVPR.2010.5539999},
  url       = {https://mlanthology.org/cvpr/2010/kaaniche2010cvpr-gesture/}
}