Unsupervised Video Adaptation for Parsing Human Motion

Abstract

In this paper, we propose a method to parse human motion in unconstrained Internet videos without labeling any videos for training. Instead, we use training samples from a public image pose dataset, avoiding the tedium of labeling video streams. Two main problems arise. First, the distributions of images and videos differ. Second, no temporal information is available in the training images. To smooth the inconsistency between the labeled images and unlabeled videos, our algorithm iteratively incorporates pose knowledge harvested from the test videos into the image pose detector via an adjust-and-refine method. During this process, continuity and tracking constraints are imposed to leverage the spatio-temporal information available only in videos. We collected two datasets from YouTube, and experiments show that our method parses human motion well. Furthermore, we found that using unlabeled videos yields better performance than adding more labeled pose images to the training set.
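The abstract describes an iterative self-training loop: a detector trained on labeled images harvests temporally consistent, high-confidence poses from unlabeled video and folds them back into its training set. The sketch below illustrates that idea only; the detector interface, confidence score, and continuity threshold are hypothetical stand-ins, not the authors' actual implementation.

import numpy as np

def continuity_ok(pose_a, pose_b, max_joint_shift=20.0):
    """Accept a detection only if joints move smoothly between adjacent frames (assumed threshold)."""
    return np.all(np.linalg.norm(pose_a - pose_b, axis=1) < max_joint_shift)

def adapt_detector(detector, labeled_images, video_frames, n_iters=5, score_thresh=0.8):
    """Iteratively add confident, temporally consistent video detections as pseudo-labels."""
    train_set = list(labeled_images)               # (image, pose) pairs from the image pose dataset
    for _ in range(n_iters):
        detector.fit(train_set)                    # retrain on images plus accepted video poses
        prev_pose = None
        for frame in video_frames:
            pose, score = detector.predict(frame)  # pose: (num_joints, 2) array, score: confidence
            # Keep only detections that are confident and consistent with the previous frame.
            if score > score_thresh and (prev_pose is None or continuity_ok(pose, prev_pose)):
                train_set.append((frame, pose))    # harvest pose knowledge from the test video
            prev_pose = pose
    return detector

The continuity check is where the video-only spatio-temporal information enters: pseudo-labels that jump implausibly between adjacent frames are rejected before the detector is refined.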

Cite

Text

Shen et al. "Unsupervised Video Adaptation for Parsing Human Motion." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10602-1_23

Markdown

[Shen et al. "Unsupervised Video Adaptation for Parsing Human Motion." European Conference on Computer Vision, 2014.](https://mlanthology.org/eccv/2014/shen2014eccv-unsupervised/) doi:10.1007/978-3-319-10602-1_23

BibTeX

@inproceedings{shen2014eccv-unsupervised,
  title     = {{Unsupervised Video Adaptation for Parsing Human Motion}},
  author    = {Shen, Haoquan and Yu, Shoou-I and Yang, Yi and Meng, Deyu and Hauptmann, Alexander G.},
  booktitle = {European Conference on Computer Vision},
  year      = {2014},
  pages     = {347--360},
  doi       = {10.1007/978-3-319-10602-1_23},
  url       = {https://mlanthology.org/eccv/2014/shen2014eccv-unsupervised/}
}