Predicting Camera Viewpoint Improves Cross-Dataset Generalization for 3D Human Pose Estimation
Abstract
Monocular estimation of 3D human pose has attracted increased attention with the availability of large ground-truth motion capture datasets. However, the diversity of available training data is limited and it is not clear to what extent methods generalize outside the specific datasets on which they are trained. In this work we carry out a systematic study of the diversity and biases present in specific datasets and their effect on cross-dataset generalization across a compendium of 5 pose datasets. We specifically focus on systematic differences in the distribution of camera viewpoints relative to a body-centered coordinate frame. Based on this observation, we propose an auxiliary task of predicting the camera viewpoint in addition to pose. We find that models trained to jointly predict viewpoint and pose consistently show significantly improved cross-dataset generalization.
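The abstract describes a multi-task setup: a shared model predicts 3D pose together with an auxiliary camera-viewpoint output. The sketch below (PyTorch) is only a minimal illustration of that general idea, assuming a 2D-keypoint input, a quaternion viewpoint parameterization, and a weighted auxiliary loss; none of the module names, dimensions, or weights come from the paper.

```python
# Minimal sketch of joint pose + viewpoint prediction (illustrative assumptions,
# not the authors' architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseWithViewpoint(nn.Module):
    def __init__(self, num_joints=17, feat_dim=1024):
        super().__init__()
        # Placeholder backbone: a simple MLP over flattened 2D keypoints.
        self.backbone = nn.Sequential(
            nn.Linear(num_joints * 2, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Primary head: 3D joint locations in a body-centered frame.
        self.pose_head = nn.Linear(feat_dim, num_joints * 3)
        # Auxiliary head: camera viewpoint, here a unit quaternion (assumption).
        self.view_head = nn.Linear(feat_dim, 4)

    def forward(self, keypoints_2d):
        f = self.backbone(keypoints_2d)
        pose_3d = self.pose_head(f)
        viewpoint = F.normalize(self.view_head(f), dim=-1)
        return pose_3d, viewpoint

def joint_loss(pose_pred, pose_gt, view_pred, view_gt, aux_weight=0.1):
    # Total objective: pose regression plus a weighted auxiliary viewpoint term.
    return F.mse_loss(pose_pred, pose_gt) + aux_weight * F.mse_loss(view_pred, view_gt)

# Usage example with random tensors, batch of 8 poses with 17 joints.
model = PoseWithViewpoint()
pose_3d, viewpoint = model(torch.randn(8, 17 * 2))
```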
Cite
Text
Wang et al. "Predicting Camera Viewpoint Improves Cross-Dataset Generalization for 3D Human Pose Estimation." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66096-3_36
Markdown
[Wang et al. "Predicting Camera Viewpoint Improves Cross-Dataset Generalization for 3D Human Pose Estimation." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/wang2020eccvw-predicting/) doi:10.1007/978-3-030-66096-3_36
BibTeX
@inproceedings{wang2020eccvw-predicting,
title = {{Predicting Camera Viewpoint Improves Cross-Dataset Generalization for 3D Human Pose Estimation}},
author = {Wang, Zhe and Shin, Daeyun and Fowlkes, Charless C.},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
  pages = {523--540},
doi = {10.1007/978-3-030-66096-3_36},
url = {https://mlanthology.org/eccvw/2020/wang2020eccvw-predicting/}
}