View-Invariant Probabilistic Embedding for Human Pose

Abstract

Depictions of similar human body configurations can vary with changing viewpoints. Using only 2D information, we would like to enable vision algorithms to recognize similarity in human body poses across multiple views. This ability is useful for analyzing body movements and human behaviors in images and videos. In this paper, we propose an approach for learning a compact view-invariant embedding space from 2D joint keypoints alone, without explicitly predicting 3D poses. Since 2D poses are projected from 3D space, they have an inherent ambiguity, which is difficult to represent through a deterministic mapping. Hence, we use probabilistic embeddings to model this input uncertainty. Experimental results show that our embedding model achieves higher accuracy when retrieving similar poses across different camera views, in comparison with 2D-to-3D pose lifting models. We also demonstrate the effectiveness of applying our embeddings to view-invariant action recognition and video alignment. Our code will be released for research use.
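
The abstract describes mapping 2D joint keypoints into a compact embedding space while representing the inherent 2D-to-3D ambiguity with probabilistic embeddings. The sketch below is a minimal illustration of that idea, not the authors' released implementation: it assumes a diagonal-Gaussian parameterization (a mean head and a log-variance head over flattened 2D keypoints), and the class name ProbabilisticPoseEmbedder, the layer sizes, and the 16-keypoint input are illustrative assumptions.

# Minimal sketch (not the authors' released code): a probabilistic embedding
# head that maps flattened 2D joint keypoints to a diagonal Gaussian in a
# compact embedding space, so input ambiguity is represented as variance.
# The 16-keypoint input and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class ProbabilisticPoseEmbedder(nn.Module):
    def __init__(self, num_keypoints=16, embed_dim=16, hidden_dim=1024):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(num_keypoints * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Separate heads predict the Gaussian mean and log-variance.
        self.mean_head = nn.Linear(hidden_dim, embed_dim)
        self.logvar_head = nn.Linear(hidden_dim, embed_dim)

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, num_keypoints, 2) normalized 2D joint positions.
        h = self.backbone(keypoints_2d.flatten(start_dim=1))
        return self.mean_head(h), self.logvar_head(h)

    def sample(self, keypoints_2d, num_samples=20):
        # Reparameterized samples from the predicted Gaussian; a retrieval
        # system can score cross-view pose similarity from how close the
        # sampled embeddings of two inputs are under a learned metric.
        mean, logvar = self.forward(keypoints_2d)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn(num_samples, *mean.shape)
        return mean.unsqueeze(0) + eps * std.unsqueeze(0)

In a retrieval setting, one would embed query and index poses detected in different camera views and rank index poses by a similarity defined on the predicted distributions (for example, a sample-based matching probability); the actual training objective and evaluation protocol are those described in the paper.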

Cite

Text

Sun et al. "View-Invariant Probabilistic Embedding for Human Pose." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58558-7_4

Markdown

[Sun et al. "View-Invariant Probabilistic Embedding for Human Pose." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/sun2020eccv-viewinvariant/) doi:10.1007/978-3-030-58558-7_4

BibTeX

@inproceedings{sun2020eccv-viewinvariant,
  title     = {{View-Invariant Probabilistic Embedding for Human Pose}},
  author    = {Sun, Jennifer J. and Zhao, Jiaping and Chen, Liang-Chieh and Schroff, Florian and Adam, Hartwig and Liu, Ting},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58558-7_4},
  url       = {https://mlanthology.org/eccv/2020/sun2020eccv-viewinvariant/}
}