Key Frame Proposal Network for Efficient Pose Estimation in Videos
Abstract
Human pose estimation in video typically relies on local information, either by estimating each frame independently or by tracking poses across frames. In this paper, we propose a novel method that combines local approaches with global context. We introduce a lightweight, unsupervised key-frame proposal network (K-FPN) to select informative frames and a learned dictionary to recover the entire pose sequence from these frames. The K-FPN speeds up pose estimation and provides robustness to bad frames with occlusion, motion blur, and illumination changes, while the learned dictionary provides global dynamic context. Experiments on the Penn Action and sub-JHMDB datasets show that the proposed method achieves state-of-the-art accuracy with a substantial speed-up.
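To make the recovery idea concrete, here is a minimal sketch, assuming a dictionary-based reconstruction: given pose observations at a few selected key frames, coefficients are fit against the rows of a temporal dictionary and the full sequence is interpolated with the complete dictionary. The shapes, the random dictionary `D`, and the fixed `key_idx` are illustrative assumptions only, not the architecture or training procedure described in the paper.

```python
import numpy as np

# Toy illustration: recover a full pose sequence from a few key frames
# using a (here randomly initialized) dictionary of temporal atoms.
# T frames, J joints with 2D coordinates, K dictionary atoms.
T, J, K = 40, 13, 8
rng = np.random.default_rng(0)

D = rng.standard_normal((T, K))          # hypothetical "learned" dictionary (T x K)
poses = rng.standard_normal((T, 2 * J))  # full pose sequence, treated as unknown

key_idx = np.array([0, 10, 20, 30, 39])  # frames a key-frame selector might pick
Y_key = poses[key_idx]                   # poses observed only at the key frames

# Fit coefficients so that D[key_idx] @ C approximates the key-frame poses,
# then apply the full dictionary to interpolate the remaining frames.
C, *_ = np.linalg.lstsq(D[key_idx], Y_key, rcond=None)
poses_hat = D @ C                        # recovered (T x 2J) pose sequence

print(poses_hat.shape)  # (40, 26)
```

In the paper the dictionary is learned and the key frames are proposed by the K-FPN rather than fixed by hand; this snippet only illustrates how a handful of frames plus a temporal dictionary can determine an entire sequence.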
Cite
Text
Zhang et al. "Key Frame Proposal Network for Efficient Pose Estimation in Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58520-4_36
Markdown
[Zhang et al. "Key Frame Proposal Network for Efficient Pose Estimation in Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zhang2020eccv-key/) doi:10.1007/978-3-030-58520-4_36
BibTeX
@inproceedings{zhang2020eccv-key,
title = {{Key Frame Proposal Network for Efficient Pose Estimation in Videos}},
author = {Zhang, Yuexi and Wang, Yin and Camps, Octavia and Sznaier, Mario},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58520-4_36},
url = {https://mlanthology.org/eccv/2020/zhang2020eccv-key/}
}