Semantics-Aware Test-Time Adaptation for 3D Human Pose Estimation

Abstract

This work highlights a semantic misalignment in 3D human pose estimation. For test-time adaptation, the misalignment manifests as overly smoothed and unguided predictions: the smoothing pulls predictions towards an average pose, and under occlusions or truncations the adaptation becomes fully unguided. To address this, we pioneer the integration of a semantics-aware motion prior into the test-time adaptation of 3D pose estimation. We leverage video understanding and a well-structured motion-text space so that, at test time, the model's motion predictions adhere to the video semantics. Additionally, we incorporate a missing-2D-pose completion scheme based on motion-text similarity, which strengthens the motion prior's guidance under occlusions and truncations. Our method significantly improves state-of-the-art 3D human pose estimation TTA techniques, reducing PA-MPJPE by more than 12% on 3DPW and 3DHP.
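To give a concrete sense of the idea of adapting predictions toward video semantics in a shared motion-text space, the following is a minimal, hypothetical sketch (not the authors' implementation): it scores a predicted motion against a text embedding with cosine similarity and uses that score as a test-time loss. The encoder, tensor shapes, and loss form here are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

def semantic_guidance_loss(pred_motion, text_emb, motion_encoder):
    """Pull a predicted motion toward the video's semantics in a shared
    motion-text embedding space (illustrative sketch, assumed interfaces).

    pred_motion    : (T, J, 3) predicted 3D joint sequence (assumed shape)
    text_emb       : (D,) embedding of the video's action description
    motion_encoder : callable mapping a (1, T, J, 3) motion into (1, D)
    """
    motion_emb = motion_encoder(pred_motion.unsqueeze(0)).squeeze(0)  # (D,)
    return 1.0 - F.cosine_similarity(motion_emb, text_emb, dim=0)    # scalar loss

if __name__ == "__main__":
    T, J, D = 16, 24, 512
    # Stand-in encoder: flatten the motion and project it into the text space.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(T * J * 3, D))
    motion = torch.randn(T, J, 3, requires_grad=True)
    text = torch.randn(D)
    loss = semantic_guidance_loss(motion, text, encoder)
    loss.backward()  # this gradient could drive one test-time adaptation step
    print(float(loss))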

Cite

Text

Lin et al. "Semantics-Aware Test-Time Adaptation for 3D Human Pose Estimation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Lin et al. "Semantics-Aware Test-Time Adaptation for 3D Human Pose Estimation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/lin2025icml-semanticsaware/)

BibTeX

@inproceedings{lin2025icml-semanticsaware,
  title     = {{Semantics-Aware Test-Time Adaptation for 3D Human Pose Estimation}},
  author    = {Lin, Qiuxia and Chen, Rongyu and Gu, Kerui and Yao, Angela},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {37780--37796},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/lin2025icml-semanticsaware/}
}