DeciWatch: A Simple Baseline for 10× Efficient 2D and 3D Pose Estimation
Abstract
This paper proposes DeciWatch, a simple baseline framework for video-based 2D/3D human pose estimation that achieves a 10× efficiency improvement over existing works without any performance degradation. Unlike current solutions that estimate every frame in a video, DeciWatch introduces a simple yet effective sample-denoise-recover framework that only watches sparsely sampled frames, taking advantage of the continuity of human motions and the lightweight pose representation. Specifically, DeciWatch uniformly samples less than 10% of the video frames for detailed estimation, denoises the estimated 2D/3D poses with an efficient Transformer architecture, and then accurately recovers the remaining frames using another Transformer-based network. Comprehensive experimental results on three video-based human pose estimation and body mesh recovery tasks with four datasets validate the efficiency and effectiveness of DeciWatch. Code is available at https://github.com/cure-lab/DeciWatch.
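The sample-denoise-recover idea can be sketched in a few lines. Below, uniform sampling keeps fewer than ~10% of the frames, and linear interpolation stands in for the paper's Transformer-based denoising and recovery networks (an assumption for illustration; the actual DeciWatch networks are learned). The function name and shapes are illustrative, not from the released code.

```python
import numpy as np

def sample_and_recover(poses, ratio=10):
    """Sketch of the sample-denoise-recover pipeline.

    poses: (T, J, D) array of per-frame pose estimates
           (T frames, J joints, D coordinates).
    Uniformly keeps every `ratio`-th frame (the "watched" frames),
    then recovers the skipped frames. Linear interpolation is a
    stand-in for the Transformer recovery network, exploiting the
    temporal continuity of human motion.
    """
    T = poses.shape[0]
    kept = np.arange(0, T, ratio)        # indices of sampled frames
    watched = poses[kept]                # only these need the costly estimator
    flat = watched.reshape(len(kept), -1)
    # Interpolate each joint coordinate independently over time.
    recovered = np.stack(
        [np.interp(np.arange(T), kept, flat[:, d]) for d in range(flat.shape[1])],
        axis=1,
    ).reshape(T, *poses.shape[1:])
    return kept, recovered
```

With `ratio=10`, the per-frame pose estimator runs on roughly one frame in ten, which is where the ~10× efficiency gain comes from; the recovery step only has to process lightweight pose vectors rather than images.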
Cite
Text
Zeng et al. "DeciWatch: A Simple Baseline for 10× Efficient 2D and 3D Pose Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20065-6_35
Markdown
[Zeng et al. "DeciWatch: A Simple Baseline for 10× Efficient 2D and 3D Pose Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/zeng2022eccv-deciwatch/) doi:10.1007/978-3-031-20065-6_35
BibTeX
@inproceedings{zeng2022eccv-deciwatch,
title = {{DeciWatch: A Simple Baseline for 10× Efficient 2D and 3D Pose Estimation}},
author = {Zeng, Ailing and Ju, Xuan and Yang, Lei and Gao, Ruiyuan and Zhu, Xizhou and Dai, Bo and Xu, Qiang},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20065-6_35},
url = {https://mlanthology.org/eccv/2022/zeng2022eccv-deciwatch/}
}