View and Style-Independent Action Manifolds for Human Activity Recognition

Abstract

We introduce a novel approach to automatically learning intuitive and compact descriptors of human body motions for activity recognition. Each action descriptor is produced by first applying Temporal Laplacian Eigenmaps to view-dependent videos, yielding a style-invariant embedded manifold for each view separately. All view-dependent manifolds are then automatically combined to discover a unified representation that models an action in a single three-dimensional space, independently of style and viewpoint. In addition, a bidirectional nonlinear mapping function is incorporated to allow actions to be projected between the original and embedded spaces. The proposed framework is evaluated on a real and challenging dataset (IXMAS), which comprises a variety of actions seen from arbitrary viewpoints. Experimental results demonstrate robustness against style and view variation and match the accuracy of the best existing action recognition methods.
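To give a flavor of the embedding step described above, the following is a minimal sketch using standard Laplacian Eigenmaps (via scikit-learn's `SpectralEmbedding`) to map a high-dimensional frame sequence into a 3-D manifold. Note this is only illustrative: the paper's Temporal Laplacian Eigenmaps additionally exploits the temporal ordering of frames, and all data and parameter choices here are assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed setup, NOT the paper's Temporal Laplacian
# Eigenmaps): embed a synthetic action sequence into a 3-D manifold with
# standard Laplacian Eigenmaps.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)

# Synthetic "action sequence": 200 frames of a noisy periodic motion traced
# through a 30-dimensional observation space (a stand-in for silhouette or
# pose features extracted from video).
t = np.linspace(0, 4 * np.pi, 200)
basis = rng.standard_normal((2, 30))
frames = np.column_stack([np.sin(t), np.cos(t)]) @ basis
frames += 0.05 * rng.standard_normal(frames.shape)

# Embed each frame into three dimensions, mirroring the paper's choice of a
# single three-dimensional action space.
embedder = SpectralEmbedding(n_components=3, n_neighbors=10)
manifold = embedder.fit_transform(frames)

print(manifold.shape)  # (200, 3)
```

In the paper, one such manifold would be learned per camera view and the view-dependent manifolds then fused into a unified representation; this sketch covers only the per-view embedding.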

Cite

Text

Lewandowski et al. "View and Style-Independent Action Manifolds for Human Activity Recognition." European Conference on Computer Vision, 2010. doi:10.1007/978-3-642-15567-3_40

Markdown

[Lewandowski et al. "View and Style-Independent Action Manifolds for Human Activity Recognition." European Conference on Computer Vision, 2010.](https://mlanthology.org/eccv/2010/lewandowski2010eccv-view/) doi:10.1007/978-3-642-15567-3_40

BibTeX

@inproceedings{lewandowski2010eccv-view,
  title     = {{View and Style-Independent Action Manifolds for Human Activity Recognition}},
  author    = {Lewandowski, Michal and Makris, Dimitrios and Nebel, Jean-Christophe},
  booktitle = {European Conference on Computer Vision},
  year      = {2010},
  pages     = {547--560},
  doi       = {10.1007/978-3-642-15567-3_40},
  url       = {https://mlanthology.org/eccv/2010/lewandowski2010eccv-view/}
}