Super Normal Vector for Activity Recognition Using Depth Sequences

Abstract

This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal, which jointly characterizes local motion and shape information. To globally capture spatial and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme for aggregating the low-level polynormals into the super normal vector (SNV), which can be seen as a simplified version of the Fisher kernel representation. In extensive experiments, we achieve classification results superior to all previously published results on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.
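
The sketch below is a rough, non-authoritative illustration of the pipeline the abstract describes: it computes hypersurface normals from a depth sequence, groups them into polynormals over local space-time cells, and pools dictionary-coded polynormals over space-time grids into one descriptor. It is not the authors' implementation; uniform grids stand in for the adaptive spatio-temporal pyramid, hard-assignment histogram coding stands in for the Fisher-kernel-style SNV aggregation, and the patch size, grid layout, dictionary size, and array shapes are illustrative assumptions.

# Minimal sketch (assumptions noted above), for a depth video as a NumPy array of shape (T, H, W).
import numpy as np
from sklearn.cluster import KMeans

def hypersurface_normals(depth):
    """Normals of the depth hypersurface, proportional to (-dD/dx, -dD/dy, -dD/dt, 1)."""
    dt, dy, dx = np.gradient(depth.astype(np.float64))
    n = np.stack([-dx, -dy, -dt, np.ones_like(depth, dtype=np.float64)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)          # shape (T, H, W, 4)

def polynormals(normals, patch=3):
    """Concatenate normals inside each non-overlapping patch x patch x patch space-time cell."""
    T, H, W, _ = normals.shape
    T, H, W = (T // patch) * patch, (H // patch) * patch, (W // patch) * patch
    n = normals[:T, :H, :W].reshape(T // patch, patch, H // patch, patch, W // patch, patch, 4)
    n = n.transpose(0, 2, 4, 1, 3, 5, 6)                          # cells first, local offsets last
    return n.reshape(-1, patch * patch * patch * 4)               # one polynormal per cell

def snv(depth, kmeans, grids=((1, 1, 1), (2, 2, 2))):
    """Code polynormals against a learned dictionary and pool them over space-time grids."""
    feats = []
    T, H, W = depth.shape
    for gt, gy, gx in grids:
        for it in range(gt):
            for iy in range(gy):
                for ix in range(gx):
                    cell = depth[it * T // gt:(it + 1) * T // gt,
                                 iy * H // gy:(iy + 1) * H // gy,
                                 ix * W // gx:(ix + 1) * W // gx]
                    p = polynormals(hypersurface_normals(cell))
                    # Hard-assignment coding with sum pooling; the paper's aggregation
                    # is a richer, Fisher-kernel-like scheme.
                    hist = np.bincount(kmeans.predict(p), minlength=kmeans.n_clusters)
                    feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# Usage: learn a dictionary from training polynormals, then describe each depth video.
train = np.random.rand(30, 48, 64)                                # placeholder depth sequence
kmeans = KMeans(n_clusters=16, n_init=4).fit(polynormals(hypersurface_normals(train)))
descriptor = snv(train, kmeans)

In practice, the per-cell descriptors would feed a standard classifier (e.g., a linear SVM), with the dictionary fit only on training videos.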

Cite

Text

Yang and Tian. "Super Normal Vector for Activity Recognition Using Depth Sequences." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.108

Markdown

[Yang and Tian. "Super Normal Vector for Activity Recognition Using Depth Sequences." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/yang2014cvpr-super/) doi:10.1109/CVPR.2014.108

BibTeX

@inproceedings{yang2014cvpr-super,
  title     = {{Super Normal Vector for Activity Recognition Using Depth Sequences}},
  author    = {Yang, Xiaodong and Tian, YingLi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.108},
  url       = {https://mlanthology.org/cvpr/2014/yang2014cvpr-super/}
}