ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics

Abstract

Our work focuses on the development of a learnable neural representation of human pose for advanced AI-assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g., locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a Transformer-based baseline in both accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code.
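The abstract's core idea can be illustrated with a minimal sketch: a permutation-invariant encoder that accepts a variable number of effector embeddings, pools them into a fixed-size "prototype," and passes residuals forward between blocks. This is a simplified NumPy illustration of the general proto-residual pattern, not the paper's exact network; all layer shapes and the `mlp` helper are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # Two-layer MLP with ReLU, applied independently to each effector embedding.
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def proto_res_encoder(effectors, blocks):
    """Encode a variable-size set of effector embeddings into one fixed-size
    pose code (illustrative sketch of the proto-residual pattern)."""
    x = effectors                        # (num_effectors, dim), any num_effectors
    proto = np.zeros(effectors.shape[1])
    for params in blocks:
        h = mlp(x, *params)              # per-effector transform
        p = h.mean(axis=0)               # prototype: permutation-invariant pooling
        proto = proto + p                # accumulate each block's prototype
        x = x - h                        # residual: pass the remainder to the next block
    return proto

dim = 8
blocks = [(rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim),
           rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim))
          for _ in range(3)]

# Works with any number of user-specified effectors (sparse, variable input):
for n in (1, 3, 6):
    eff = rng.standard_normal((n, dim))
    z = proto_res_encoder(eff, blocks)
    assert z.shape == (dim,)
```

Mean pooling makes the encoding independent of both the number and the order of user inputs, which is what lets a single model handle arbitrary subsets of constrained joints; a decoder (not shown) would map the prototype to a full pose.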

Cite

Text

Oreshkin et al. "ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics." International Conference on Learning Representations, 2022.

Markdown

[Oreshkin et al. "ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/oreshkin2022iclr-protores/)

BibTeX

@inproceedings{oreshkin2022iclr-protores,
  title     = {{ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics}},
  author    = {Oreshkin, Boris N. and Bocquelet, Florent and Harvey, Felix G. and Raitt, Bay and Laflamme, Dominic},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/oreshkin2022iclr-protores/}
}