Iterative Action and Pose Recognition Using Global-and-Pose Features and Action-Specific Models

Abstract

This paper proposes an iterative scheme between human action classification and pose estimation in still images. For initial action classification, we employ global image features that represent a scene (e.g. people, background, and other objects) and can be extracted without any difficult human-region analysis such as segmentation or pose estimation. This classification yields probability estimates of the possible actions in a query image. The probability estimates are then used to evaluate the results of pose estimation with action-specific models. The estimated pose is merged with the global features for action re-classification. This iterative scheme can mutually improve action classification and pose estimation. Experimental results on a public dataset demonstrate the effectiveness of global features for initialization, action-specific models for pose estimation, and action classification with global and pose features.
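The abstract describes an iterative loop: classify the action from global scene features alone, estimate the pose with action-specific models weighted by the action probabilities, then re-classify using the merged global-and-pose features, and repeat. A minimal sketch of that loop is below; all function names, weight matrices, and the per-action pose vectors are toy stand-ins for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(global_feats, pose_feats=None):
    """Toy action classifier over 3 actions. When pose features are
    available, they are combined with the global features, mirroring
    the paper's global-and-pose re-classification step."""
    W_g = np.array([[1.0, 0.2, 0.1],
                    [0.1, 1.0, 0.3]])          # toy global-feature weights
    scores = global_feats @ W_g
    if pose_feats is not None:
        W_p = np.array([[0.8, 0.1, 0.1],
                        [0.1, 0.1, 0.9]])      # toy pose-feature weights
        scores = scores + pose_feats @ W_p
    return softmax(scores)

def estimate_pose(action_probs):
    """Stand-in for pose estimation with action-specific models:
    conceptually, one pose model per action, with the action
    probabilities used to evaluate the candidates. Here each
    'model' simply returns a fixed toy pose vector."""
    action_poses = np.array([[1.0, 0.0],
                             [0.5, 0.5],
                             [0.0, 1.0]])
    best = int(np.argmax(action_probs))
    return action_poses[best]

def iterate(global_feats, n_iters=5, tol=1e-6):
    """The iterative scheme: global-only classification initializes
    the loop; pose estimation and re-classification then refine
    each other until the action probabilities stop changing."""
    probs = classify(global_feats)                # init: global features only
    pose = estimate_pose(probs)
    for _ in range(n_iters):
        pose = estimate_pose(probs)               # action-specific pose models
        new_probs = classify(global_feats, pose)  # merged global-and-pose features
        if np.abs(new_probs - probs).max() < tol:
            break
        probs = new_probs
    return probs, pose

probs, pose = iterate(np.array([0.9, 0.3]))
print(probs.round(3), pose)
```

With the toy weights above the loop converges in a couple of iterations; the point of the sketch is the control flow (initialize from global features, alternate pose estimation and re-classification), not the numbers.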

Cite

Text

Norimichi Ukita. "Iterative Action and Pose Recognition Using Global-and-Pose Features and Action-Specific Models." IEEE/CVF International Conference on Computer Vision Workshops, 2013. doi:10.1109/ICCVW.2013.68

Markdown

[Norimichi Ukita. "Iterative Action and Pose Recognition Using Global-and-Pose Features and Action-Specific Models." IEEE/CVF International Conference on Computer Vision Workshops, 2013.](https://mlanthology.org/iccvw/2013/ukita2013iccvw-iterative/) doi:10.1109/ICCVW.2013.68

BibTeX

@inproceedings{ukita2013iccvw-iterative,
  title     = {{Iterative Action and Pose Recognition Using Global-and-Pose Features and Action-Specific Models}},
  author    = {Ukita, Norimichi},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2013},
  pages     = {476--483},
  doi       = {10.1109/ICCVW.2013.68},
  url       = {https://mlanthology.org/iccvw/2013/ukita2013iccvw-iterative/}
}