Contact and Human Dynamics from Monocular Video

Abstract

Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles. In this paper, we present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input. We first estimate ground contact timings with a novel prediction network which is trained without hand-labeled data. A physics-based trajectory optimization then solves for a physically-plausible motion, based on the inputs. We show this process produces motions that are significantly more realistic than those from purely kinematic methods, substantially improving quantitative measures of both kinematic and dynamic plausibility. We demonstrate our method on character animation and pose estimation tasks on dynamic motions of dancing and sports with complex contact patterns.
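To make the two-stage pipeline sketched in the abstract (contact estimation, then physics-based trajectory optimization) more concrete, the short NumPy sketch below flags per-frame foot contacts from a single foot joint's 3D trajectory using height and velocity thresholds. The function name, thresholds, and the heuristic itself are illustrative assumptions for this page only; the paper instead estimates contact timings with a learned prediction network trained without hand-labeled data.

import numpy as np

def detect_foot_contacts(foot_pos, fps=30.0, height_thresh=0.05, vel_thresh=0.15):
    """Flag frames where a foot joint is plausibly in ground contact.

    foot_pos: (T, 3) array of one foot joint's 3D positions in meters,
              with the vertical axis as the last coordinate.
    Returns a (T,) boolean array of per-frame contact flags.
    Illustrative heuristic only; not the paper's contact-prediction network.
    """
    foot_pos = np.asarray(foot_pos, dtype=float)
    # Approximate the ground plane height from the lowest observed positions.
    ground = np.percentile(foot_pos[:, 2], 5)
    near_ground = (foot_pos[:, 2] - ground) < height_thresh
    # Finite-difference joint speed in meters per second.
    speed = np.linalg.norm(np.gradient(foot_pos, axis=0), axis=1) * fps
    slow = speed < vel_thresh
    return near_ground & slow

In the paper, the estimated contact timings feed the second stage, a physics-based trajectory optimization that solves for the final physically-plausible motion given the initial 2D and 3D pose estimates.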

Cite

Text

Rempe et al. "Contact and Human Dynamics from Monocular Video." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58558-7_5

Markdown

[Rempe et al. "Contact and Human Dynamics from Monocular Video." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/rempe2020eccv-contact/) doi:10.1007/978-3-030-58558-7_5

BibTeX

@inproceedings{rempe2020eccv-contact,
  title     = {{Contact and Human Dynamics from Monocular Video}},
  author    = {Rempe, Davis and Guibas, Leonidas J. and Hertzmann, Aaron and Russell, Bryan and Villegas, Ruben and Yang, Jimei},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58558-7_5},
  url       = {https://mlanthology.org/eccv/2020/rempe2020eccv-contact/}
}