P-CNN: Pose-Based CNN Features for Action Recognition

Abstract

This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.
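One of the temporal aggregation schemes evaluated for P-CNN is min/max pooling of per-frame CNN descriptors along a body-part track. A minimal numpy sketch, with illustrative function names and feature shapes (not the paper's exact pipeline):

```python
import numpy as np

def aggregate_part_features(frame_features: np.ndarray) -> np.ndarray:
    """Min/max temporal aggregation of per-frame CNN features for one part.

    frame_features: array of shape (T, D), one D-dim descriptor per frame.
    Returns a 2*D video-level descriptor (max-pooled then min-pooled).
    Shapes and naming are illustrative assumptions, not the paper's code.
    """
    f_max = frame_features.max(axis=0)  # element-wise max over time
    f_min = frame_features.min(axis=0)  # element-wise min over time
    return np.concatenate([f_max, f_min])

# Example: 10 frames of 4096-dim features for one body part.
feats = np.random.randn(10, 4096)
video_desc = aggregate_part_features(feats)  # shape (8192,)
```

Per-part descriptors for appearance and motion streams would then be concatenated into the final video representation.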

Cite

Text

Chéron et al. "P-CNN: Pose-Based CNN Features for Action Recognition." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.368

Markdown

[Chéron et al. "P-CNN: Pose-Based CNN Features for Action Recognition." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/cheron2015iccv-pcnn/) doi:10.1109/ICCV.2015.368

BibTeX

@inproceedings{cheron2015iccv-pcnn,
  title     = {{P-CNN: Pose-Based CNN Features for Action Recognition}},
  author    = {Ch\'{e}ron, Guilhem and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.368},
  url       = {https://mlanthology.org/iccv/2015/cheron2015iccv-pcnn/}
}