Refining Joint Locations for Human Pose Tracking in Sports Videos

Abstract

The estimation of an athlete's pose in video footage enables the automation of athletic performance assessment, the prediction of motion kinematics and dynamics in sports videos, and the possibility of technology-assisted, direct training feedback. Despite remarkable progress in deep-learning-assisted human pose estimation, the performance of such systems degrades, and noise and errors increase, as scene complexity grows. In this paper, we focus on aquatic training scenarios, where even recent pose estimators produce several types of orthogonal errors, including joint swaps and prediction outliers. In order to improve the estimation of an athlete's pose in swimming, we formulate a graph partitioning problem that connects pose estimates over time and explicitly allows joints to switch labels when their locations better fit each other's trajectories. We optimize the problem using integer linear programming, which partitions the graph into the most probable joint trajectories. We show experimentally that our joint rectification method improves the joint detection precision of swimmers in a swimming channel by 0.8%–4.8% PCK for anti-symmetrical motion and up to 1.8% PCK for symmetrical styles.
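The core idea of the abstract, recovering joint trajectories while allowing joint labels (e.g. left/right wrist) to swap when that yields smoother tracks, can be illustrated with a toy sketch. This is not the authors' implementation: instead of the paper's integer linear program over a trajectory graph, it brute-forces a swap/no-swap decision per frame for a single joint pair and keeps the assignment with the shortest total trajectory length. All names and the cost function are illustrative assumptions.

```python
from itertools import product


def refine_joint_labels(detections):
    """Pick per-frame label assignments (keep or swap a joint pair) that
    minimize total trajectory length.

    `detections` is a list of frames; each frame is a tuple of two (x, y)
    points whose labels may have been swapped by the pose estimator.
    Returns the frames with labels corrected.
    """
    n = len(detections)
    best_cost, best_assign = float("inf"), None
    # Brute force over swap (1) / no-swap (0) per frame. This is only
    # feasible for toy inputs; the paper solves the full multi-joint
    # problem with integer linear programming on a trajectory graph.
    for assign in product((0, 1), repeat=n):
        cost = 0.0
        for t in range(1, n):
            for j in range(2):
                # Joint j at frame t under this assignment; XOR with the
                # swap bit selects the flipped detection when assign[t] == 1.
                ax, ay = detections[t - 1][j ^ assign[t - 1]]
                bx, by = detections[t][j ^ assign[t]]
                cost += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return [(f[1], f[0]) if a else f
            for f, a in zip(detections, best_assign)]


# Two joints moving on parallel vertical tracks; the estimator swapped
# their labels in the middle frame. The refinement undoes the swap.
frames = [((0, 0), (10, 0)), ((10, 1), (0, 1)), ((0, 2), (10, 2))]
corrected = refine_joint_labels(frames)
```

A dynamic program over frames would solve this pairwise case in linear time; the ILP formulation in the paper is what makes the approach scale to many joints and outlier hypotheses simultaneously.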

Cite

Text

Zecha et al. "Refining Joint Locations for Human Pose Tracking in Sports Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00308

Markdown

[Zecha et al. "Refining Joint Locations for Human Pose Tracking in Sports Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/zecha2019cvprw-refining/) doi:10.1109/CVPRW.2019.00308

BibTeX

@inproceedings{zecha2019cvprw-refining,
  title     = {{Refining Joint Locations for Human Pose Tracking in Sports Videos}},
  author    = {Zecha, Dan and Einfalt, Moritz and Lienhart, Rainer},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {2524--2532},
  doi       = {10.1109/CVPRW.2019.00308},
  url       = {https://mlanthology.org/cvprw/2019/zecha2019cvprw-refining/}
}