EigenJoints-Based Action Recognition Using Naïve-Bayes-Nearest-Neighbor
Abstract
In this paper, we propose an effective method to recognize human actions from the 3D positions of body joints. With the release of RGBD sensors and their associated SDKs, human body joints can be extracted in real time with reasonable accuracy. We propose a new feature type, EigenJoints, based on position differences of joints, which combines static posture, motion, and offset information. We further employ the Naïve-Bayes-Nearest-Neighbor (NBNN) classifier for multi-class action classification. The recognition results on the Microsoft Research (MSR) Action3D dataset demonstrate that our approach significantly outperforms state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to recognize actions on the MSR Action3D dataset. We observe that 15-20 frames are sufficient to achieve results comparable to those obtained using entire video sequences.
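The abstract's pipeline can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): per-frame descriptors are built from joint position differences within the current frame (posture), against the previous frame (motion), and against the first frame (offset), then projected via PCA; NBNN then sums per-descriptor nearest-neighbor distances to each class. All names, dimensions, and normalization details here are assumptions for illustration only.

```python
import numpy as np

def eigenjoints(frames, n_components=32):
    """Sketch of EigenJoints descriptors (details are assumptions).

    frames: (T, J, 3) array of 3D joint positions over T frames.
    Returns a (T, k) array of PCA-projected per-frame descriptors.
    """
    T, J, _ = frames.shape
    iu = np.triu_indices(J, k=1)  # unique joint pairs within a frame
    feats = []
    for t in range(T):
        cur, prev, init = frames[t], frames[max(t - 1, 0)], frames[0]
        f_cc = (cur[:, None, :] - cur[None, :, :])[iu].ravel()   # static posture
        f_cp = (cur[:, None, :] - prev[None, :, :]).ravel()      # motion
        f_ci = (cur[:, None, :] - init[None, :, :]).ravel()      # offset
        feats.append(np.concatenate([f_cc, f_cp, f_ci]))
    X = np.asarray(feats)
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)  # normalize each dimension
    # PCA via SVD: project onto the leading principal components
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    k = min(n_components, Vt.shape[0])
    return (X - X.mean(0)) @ Vt[:k].T

def nbnn_classify(test_descs, class_descs):
    """NBNN: sum nearest-neighbor distances to each class; pick the minimum."""
    scores = {}
    for label, D in class_descs.items():
        # pairwise distances between test descriptors and class descriptors
        d = np.linalg.norm(test_descs[:, None, :] - D[None, :, :], axis=-1)
        scores[label] = d.min(axis=1).sum()
    return min(scores, key=scores.get)
```

A usage sketch: compute `eigenjoints` for a test sequence, pool training descriptors per action class, and call `nbnn_classify` to obtain the predicted action label.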
Cite
Text
Yang and Tian. "EigenJoints-Based Action Recognition Using Naïve-Bayes-Nearest-Neighbor." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012. doi:10.1109/CVPRW.2012.6239232
Markdown
[Yang and Tian. "EigenJoints-Based Action Recognition Using Naïve-Bayes-Nearest-Neighbor." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012.](https://mlanthology.org/cvprw/2012/yang2012cvprw-eigenjointsbased/) doi:10.1109/CVPRW.2012.6239232
BibTeX
@inproceedings{yang2012cvprw-eigenjointsbased,
title = {{EigenJoints-Based Action Recognition Using Naïve-Bayes-Nearest-Neighbor}},
author = {Yang, Xiaodong and Tian, Yingli},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2012},
pages = {14-19},
doi = {10.1109/CVPRW.2012.6239232},
url = {https://mlanthology.org/cvprw/2012/yang2012cvprw-eigenjointsbased/}
}