Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos
Abstract
Deep ConvNets have been shown to be effective for the task of human pose estimation from single images. However, several challenging issues arise in the video-based setting, such as self-occlusion, motion blur, and uncommon poses with few or no examples in the training data. Temporal information can provide additional cues about the location of body joints and help to alleviate these issues. In this paper, we propose a deep structured model to estimate a sequence of human poses in unconstrained videos. This model can be efficiently trained in an end-to-end manner and is capable of representing the appearance of body joints and their spatio-temporal relationships simultaneously. Domain knowledge about the human body is explicitly incorporated into the network, providing effective priors to regularize the skeletal structure and to enforce temporal consistency. The proposed end-to-end architecture is evaluated on two widely used benchmarks for video-based pose estimation (Penn Action and JHMDB datasets). Our approach outperforms several state-of-the-art methods.
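As a rough illustration of how temporal cues can regularize per-frame predictions, the sketch below warps the previous frame's joint heatmaps with dense optical flow and blends them with the current frame's heatmaps. This is only a minimal sketch of the general idea under assumed conventions: the function names, the simple weighted fusion with `alpha`, and the grid-sampling details are illustrative assumptions, not the paper's actual layers.

```python
# Minimal sketch: fusing a flow-warped heatmap from frame t-1 with the
# heatmap of frame t to encourage temporal consistency. All names and the
# weighted fusion are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn.functional as F


def warp_with_flow(heatmap, flow):
    """Warp a (B, J, H, W) joint heatmap with a (B, 2, H, W) flow field.

    flow[:, 0] and flow[:, 1] hold per-pixel x/y displacements in pixels.
    """
    b, _, h, w = heatmap.shape
    # Build a base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
    coords = grid + flow  # displaced sampling coordinates
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(heatmap, sample_grid, align_corners=True)


def fuse_temporal(heatmap_t, heatmap_prev, flow_prev_to_t, alpha=0.5):
    """Blend the current heatmap with the flow-aligned previous one."""
    warped_prev = warp_with_flow(heatmap_prev, flow_prev_to_t)
    return alpha * heatmap_t + (1.0 - alpha) * warped_prev
```

In a full model the fusion weight would typically be learned and combined with spatial message passing over the skeletal graph, rather than fixed as a scalar blend.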
Cite
Text
Song et al. "Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.590
Markdown
[Song et al. "Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/song2017cvpr-thinslicing/) doi:10.1109/CVPR.2017.590
BibTeX
@inproceedings{song2017cvpr-thinslicing,
  title     = {{Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos}},
  author    = {Song, Jie and Wang, Limin and Van Gool, Luc and Hilliges, Otmar},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.590},
  url       = {https://mlanthology.org/cvpr/2017/song2017cvpr-thinslicing/}
}