Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning

Abstract

The goal of 2D human pose estimation (HPE) is to localize anatomical landmarks, given an image of a person in a pose. SOTA techniques make use of thousands of labeled figures (finetuning transformers or training deep CNNs), acquired using labor-intensive crowdsourcing. On the other hand, self-supervised methods re-frame the HPE task as a reconstruction problem, enabling them to leverage the vast amount of unlabeled visual data, though at the present cost of accuracy. In this work, we explore ways to improve self-supervised HPE. We (1) analyze the relationship between reconstruction quality and pose estimation accuracy, (2) develop a model pipeline that outperforms the baseline which inspired our work, using less than one-third the amount of training data, and (3) offer a new metric suitable for self-supervised settings that measures the consistency of predicted body part length proportions. We show that a combination of well-engineered reconstruction losses and inductive priors can help coordinate pose learning alongside reconstruction in a self-supervised paradigm.
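
The metric described in the abstract rewards predictions whose body-part length proportions remain consistent across images, which can be checked without ground-truth labels. The sketch below is only an illustration of that idea under stated assumptions: the limb list, keypoint layout, helper names, and the standard-deviation aggregation are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical limb definition: pairs of keypoint indices forming body parts.
# The indices assume a generic keypoint layout, not the one used in the paper.
LIMBS = [
    (0, 1),  # head - neck
    (1, 2),  # neck - right shoulder
    (2, 3),  # right shoulder - right elbow
    (3, 4),  # right elbow - right wrist
    (1, 5),  # neck - left shoulder
    (5, 6),  # left shoulder - left elbow
    (6, 7),  # left elbow - left wrist
]

def limb_proportions(keypoints: np.ndarray) -> np.ndarray:
    """Return limb lengths normalized by their sum for one predicted pose.

    keypoints: (K, 2) array of predicted 2D landmark coordinates.
    """
    lengths = np.array(
        [np.linalg.norm(keypoints[i] - keypoints[j]) for i, j in LIMBS]
    )
    return lengths / (lengths.sum() + 1e-8)

def proportion_consistency(all_keypoints: np.ndarray) -> float:
    """Score how consistent limb-length proportions are across a dataset.

    all_keypoints: (N, K, 2) predicted poses for N images.
    Returns the mean per-limb standard deviation of proportions;
    lower values indicate more anatomically consistent predictions.
    """
    props = np.stack([limb_proportions(kp) for kp in all_keypoints])  # (N, L)
    return float(props.std(axis=0).mean())

# Example usage (assuming a model that outputs (N, K, 2) keypoints):
#   preds = model(images)
#   score = proportion_consistency(preds)
```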

Cite

Text

Yoo and Russakovsky. "Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00351

Markdown

[Yoo and Russakovsky. "Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/yoo2023iccvw-efficient/) doi:10.1109/ICCVW60793.2023.00351

BibTeX

@inproceedings{yoo2023iccvw-efficient,
  title     = {{Efficient, Self-Supervised Human Pose Estimation with Inductive Prior Tuning}},
  author    = {Yoo, Nobline and Russakovsky, Olga},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2023},
  pages     = {3263--3272},
  doi       = {10.1109/ICCVW60793.2023.00351},
  url       = {https://mlanthology.org/iccvw/2023/yoo2023iccvw-efficient/}
}