Activity Recognition and Prediction with Pose Based Discriminative Patch Model

Abstract

We describe an image-based activity recognition solution that can be applied both to off-line video classification and to activity prediction from frames. We propose a Pose based Discriminative Patch (PDP) model to perform activity recognition and prediction at the image level (observing only a few frames). This model provides a general and flexible framework for incorporating discriminative patches and modeling their mutual relations in an efficient tree structure. PDP contributes in two ways: (1) PDP provides a novel solution to improve activity recognition and prediction by utilizing pose based discriminative patches instead of pose configuration features, and by modeling the patches' mutual relations. (2) PDP is an image-based algorithm, so it can make predictions from limited frames, even a single image. PDP targets challenging data captured from the Internet and movies, where we achieve a 6% improvement over the state-of-the-art method on the video-level recognition dataset Sub-JHMDB and on an image-level action recognition dataset. We also obtain a clear improvement on the activity prediction task.

Cite

Text

Cao et al. "Activity Recognition and Prediction with Pose Based Discriminative Patch Model." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477584

Markdown

[Cao et al. "Activity Recognition and Prediction with Pose Based Discriminative Patch Model." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/cao2016wacv-activity/) doi:10.1109/WACV.2016.7477584

BibTeX

@inproceedings{cao2016wacv-activity,
  title     = {{Activity Recognition and Prediction with Pose Based Discriminative Patch Model}},
  author    = {Cao, Song and Chen, Kan and Nevatia, Ram},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2016},
  pages     = {1--9},
  doi       = {10.1109/WACV.2016.7477584},
  url       = {https://mlanthology.org/wacv/2016/cao2016wacv-activity/}
}