Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video

Abstract

Despite the recent success of single image-based 3D human pose and shape estimation methods, recovering temporally consistent and smooth 3D human motion from a video is still challenging. Several video-based methods have been proposed; however, they fail to resolve the temporal inconsistency of single image-based methods because they depend strongly on the static feature of the current frame. In this regard, we present a temporally consistent mesh recovery system (TCMR). It effectively focuses on the temporal information of the past and future frames without being dominated by the current static feature. Our TCMR significantly outperforms previous video-based methods in temporal consistency while achieving better per-frame 3D pose and shape accuracy. We also release the code.
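
The core idea, as stated in the abstract, is to encode temporal information from the past and future frames separately and fuse it with the current frame's static feature so that the static feature alone cannot dominate the prediction. Below is a minimal PyTorch sketch of that general idea, not the authors' implementation; the module names, feature dimensions, and attention-based fusion are illustrative assumptions.

import torch
import torch.nn as nn

class TemporalFusionSketch(nn.Module):
    """Sketch only: separate GRUs summarize past and future frames, and a
    learned attention blends them with the current frame's static feature.
    Names and sizes are assumptions, not the authors' implementation."""

    def __init__(self, feat_dim=2048, hidden_dim=1024):
        super().__init__()
        self.past_gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.future_gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.proj_static = nn.Linear(feat_dim, hidden_dim)
        # Attention weights over the three streams: past, future, current.
        self.attention = nn.Sequential(nn.Linear(hidden_dim * 3, 3), nn.Softmax(dim=-1))

    def forward(self, static_feats):
        # static_feats: (batch, T, feat_dim), per-frame features from a CNN backbone.
        T = static_feats.shape[1]
        mid = T // 2  # treat the middle frame as the "current" frame
        past = static_feats[:, :mid]
        current = static_feats[:, mid]
        future = static_feats[:, mid + 1:]
        # Run the future GRU backward in time so its summary ends at the current frame.
        _, h_past = self.past_gru(past)
        _, h_future = self.future_gru(torch.flip(future, dims=[1]))
        f_past, f_future = h_past[-1], h_future[-1]            # (batch, hidden_dim)
        f_current = self.proj_static(current)                  # (batch, hidden_dim)
        streams = torch.stack([f_past, f_future, f_current], dim=1)
        weights = self.attention(streams.flatten(1)).unsqueeze(-1)  # (batch, 3, 1)
        fused = (weights * streams).sum(dim=1)                  # (batch, hidden_dim)
        return fused  # a full pipeline would feed this to an SMPL parameter regressor

# Usage: a 16-frame window of 2048-D per-frame features.
feats = torch.randn(2, 16, 2048)
print(TemporalFusionSketch()(feats).shape)  # torch.Size([2, 1024])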

Cite

Text

Choi et al. "Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00200

Markdown

[Choi et al. "Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/choi2021cvpr-beyond/) doi:10.1109/CVPR46437.2021.00200

BibTeX

@inproceedings{choi2021cvpr-beyond,
  title     = {{Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video}},
  author    = {Choi, Hongsuk and Moon, Gyeongsik and Chang, Ju Yong and Lee, Kyoung Mu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {1964--1973},
  doi       = {10.1109/CVPR46437.2021.00200},
  url       = {https://mlanthology.org/cvpr/2021/choi2021cvpr-beyond/}
}