Human Action Segmentation with Hierarchical Supervoxel Consistency

Abstract

Detailed analysis of human action, such as action classification, detection, and localization, has received increasing attention from the community; datasets like JHMDB have made it feasible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has remained comparatively unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment scales. Our single-layer model significantly outperforms the current state of the art on actionness, and our full model improves upon the single-layer baselines in action segmentation.
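For context, the energy minimized by this class of models typically takes the standard higher-order MRF form below. This is an illustrative sketch of the generic formulation, not the paper's exact objective; the specific potentials and cliques used by the authors are defined in the paper itself.

E(\mathbf{x}) = \sum_{i \in \mathcal{V}} \psi_i(x_i) \;+\; \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j) \;+\; \sum_{c \in \mathcal{C}} \psi_c(\mathbf{x}_c)

Here x_i is the foreground/background label of a low-level node, the unary terms \psi_i score appearance and motion evidence, the pairwise terms \psi_{ij} encourage smoothness between neighboring nodes, and the higher-order terms \psi_c are defined over cliques c that, in a hierarchical setting such as this one, would group nodes belonging to the same coarser supervoxel to enforce label consistency across scales.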

Cite

Text

Lu et al. "Human Action Segmentation with Hierarchical Supervoxel Consistency." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7299000

Markdown

[Lu et al. "Human Action Segmentation with Hierarchical Supervoxel Consistency." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/lu2015cvpr-human/) doi:10.1109/CVPR.2015.7299000

BibTeX

@inproceedings{lu2015cvpr-human,
  title     = {{Human Action Segmentation with Hierarchical Supervoxel Consistency}},
  author    = {Lu, Jiasen and Xu, Ran and Corso, Jason J.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  doi       = {10.1109/CVPR.2015.7299000},
  url       = {https://mlanthology.org/cvpr/2015/lu2015cvpr-human/}
}