Deformable Mesh Transformer for 3D Human Mesh Recovery

Abstract

We present Deformable mesh transFormer (DeFormer), a novel vertex-based approach to monocular 3D human mesh recovery. DeFormer iteratively fits a body mesh model to an input image via a mesh alignment feedback loop formed within a transformer decoder that is equipped with efficient body-mesh-driven attention modules: 1) body sparse self-attention and 2) deformable mesh cross-attention. As a result, DeFormer can effectively exploit high-resolution image feature maps and a dense mesh model, which were computationally expensive to handle in previous approaches using standard transformer attention. Experimental results show that DeFormer achieves state-of-the-art performance on the Human3.6M and 3DPW benchmarks. An ablation study is also conducted to show the effectiveness of the DeFormer model designs for leveraging multi-scale feature maps. Code is available at https://github.com/yusukey03012/DeFormer.
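To illustrate the deformable cross-attention idea mentioned in the abstract, here is a minimal single-head, single-scale sketch in the spirit of deformable attention (as popularized by Deformable DETR): each query predicts a few sampling offsets around a 2D reference point, bilinearly samples the feature map at those locations, and aggregates the samples with learned attention weights. All names, shapes, and weight matrices below are illustrative assumptions, not the actual DeFormer implementation.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a feature map feat of shape (H, W, C) at continuous (x, y)."""
    H, W, _ = feat.shape
    x = float(np.clip(x, 0, W - 1))
    y = float(np.clip(y, 0, H - 1))
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return (feat[y0, x0] * (1 - wx) * (1 - wy) + feat[y0, x1] * wx * (1 - wy)
            + feat[y1, x0] * (1 - wx) * wy + feat[y1, x1] * wx * wy)

def deformable_cross_attention(query, ref_xy, feat, W_off, W_attn):
    """One query, one head, one feature scale.

    query : (D,) query embedding (e.g. for one mesh vertex)
    ref_xy: (2,) reference point (x, y) in feature-map coordinates
    feat  : (H, W, C) feature map
    W_off : (D, K*2) projects the query to K sampling offsets (hypothetical weights)
    W_attn: (D, K) projects the query to K attention logits (hypothetical weights)
    """
    K = W_attn.shape[1]
    offsets = (query @ W_off).reshape(K, 2)      # predicted (dx, dy) per sampling point
    logits = query @ W_attn
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                           # softmax over the K sampling points
    samples = np.stack([bilinear_sample(feat, ref_xy[0] + dx, ref_xy[1] + dy)
                        for dx, dy in offsets])  # (K, C)
    return attn @ samples                        # (C,) aggregated feature

# Toy usage: one vertex query attending to K=4 points on a 16x16 feature map.
rng = np.random.default_rng(0)
D, C, K = 8, 8, 4
out = deformable_cross_attention(
    query=rng.standard_normal(D),
    ref_xy=np.array([7.5, 7.5]),
    feat=rng.standard_normal((16, 16, C)),
    W_off=rng.standard_normal((D, K * 2)),
    W_attn=rng.standard_normal((D, K)),
)
```

Because the cost per query is K samples rather than attention over all H*W locations, this style of attention stays cheap on high-resolution feature maps, which is the efficiency property the abstract attributes to DeFormer.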

Cite

Text

Yoshiyasu. "Deformable Mesh Transformer for 3D Human Mesh Recovery." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01631

Markdown

[Yoshiyasu. "Deformable Mesh Transformer for 3D Human Mesh Recovery." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/yoshiyasu2023cvpr-deformable/) doi:10.1109/CVPR52729.2023.01631

BibTeX

@inproceedings{yoshiyasu2023cvpr-deformable,
  title     = {{Deformable Mesh Transformer for 3D Human Mesh Recovery}},
  author    = {Yoshiyasu, Yusuke},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {17006--17015},
  doi       = {10.1109/CVPR52729.2023.01631},
  url       = {https://mlanthology.org/cvpr/2023/yoshiyasu2023cvpr-deformable/}
}