4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences
Abstract
We develop a computational tool that aligns motion capture (mocap) data to videos of 24-form simplified Taiji (TaiChi) Quan, a scripted motion sequence about 5 minutes long. With only prior knowledge that the subjects in video and mocap perform a similar pose sequence, we establish inter-subject temporal synchronization and spatial alignment of mocap and video based on body joint correspondences. Through time alignment and matching the viewpoint and orientation of the video camera, the 3D body joints from mocap data of subject A can be correctly projected onto the video performance of subject B. Initial quantitative evaluation of this alignment method shows promise in offering the first validated algorithmic treatment for cross-subject comparison of Taiji Quan performances. This work opens the door to subject-specific quantified comparison of long motion sequences beyond Taiji.
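The abstract describes two steps: temporal synchronization of two pose sequences and projection of one subject's 3D joints into the other's camera view. As an illustrative sketch only (not the paper's actual method or code), the temporal step can be approximated with dynamic time warping over per-frame joint distances, and the spatial step with a standard pinhole projection; all function names and parameters here are hypothetical.

```python
import numpy as np

def dtw_align(seq_a, seq_b):
    """Dynamic time warping between two pose sequences.
    seq_a: (Ta, J, 3) and seq_b: (Tb, J, 3) arrays of 3D joints.
    Returns a list of (i, j) frame correspondences."""
    Ta, Tb = len(seq_a), len(seq_b)
    # Pairwise frame cost: mean per-joint Euclidean distance.
    cost = np.linalg.norm(seq_a[:, None] - seq_b[None, :], axis=-1).mean(-1)
    acc = np.full((Ta + 1, Tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack the optimal warping path from the end.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def project(joints_3d, K, R, t):
    """Pinhole projection of (J, 3) world-space joints into the image.
    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation."""
    cam = R @ joints_3d.T + t[:, None]  # world -> camera coordinates
    uv = K @ cam                        # camera -> homogeneous image
    return (uv[:2] / uv[2]).T           # perspective divide -> (J, 2)
```

With frame pairs from `dtw_align`, each mocap frame of subject A can be passed through `project` (using the video camera's estimated pose and intrinsics) to overlay A's skeleton on subject B's video frame.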
Cite
Text
Scott et al. "4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.99
Markdown
[Scott et al. "4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/scott2017iccvw-4d/) doi:10.1109/ICCVW.2017.99
BibTeX
@inproceedings{scott2017iccvw-4d,
title = {{4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences}},
author = {Scott, Jesse and Collins, Robert T. and Funk, Christopher and Liu, Yanxi},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {795-804},
doi = {10.1109/ICCVW.2017.99},
url = {https://mlanthology.org/iccvw/2017/scott2017iccvw-4d/}
}