Mesh Graphormer
Abstract
We present a graph-convolution-reinforced transformer, named Mesh Graphormer, for 3D human pose and mesh reconstruction from a single image. Recently, both transformers and graph convolutional neural networks (GCNNs) have shown promising progress in human mesh reconstruction. Transformer-based approaches are effective in modeling non-local interactions among 3D mesh vertices and body joints, whereas GCNNs are good at exploiting neighborhood vertex interactions based on a pre-specified mesh topology. In this paper, we study how to combine graph convolutions and self-attentions in a transformer to model both local and global interactions. Experimental results show that our proposed method, Mesh Graphormer, significantly outperforms the previous state-of-the-art methods on multiple benchmarks, including the Human3.6M, 3DPW, and FreiHAND datasets. Code and pre-trained models are available at https://github.com/microsoft/MeshGraphormer.
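The core idea, combining global self-attention with a graph convolution over the mesh adjacency inside one block, can be sketched in a few lines. The sketch below is illustrative only: the function names, weight shapes, and the exact placement of the graph convolution are assumptions, not the paper's architecture (see the linked repository for the actual model).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graphormer_block(X, A, Wq, Wk, Wv, Wg):
    """Hypothetical block: self-attention (global, all-pairs) followed by
    one graph-convolution step (local, mesh-topology-aware).

    X: (N, d) vertex/joint features; A: (N, N) mesh adjacency (with self-loops);
    Wq, Wk, Wv, Wg: (d, d) weight matrices. Layout is a sketch, not the paper's."""
    # Global interactions: scaled dot-product self-attention over all tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    global_out = attn @ V
    # Local interactions: row-normalized adjacency averages neighbor features.
    D_inv = np.diag(1.0 / A.sum(axis=1))
    local_out = D_inv @ A @ global_out @ Wg
    return X + local_out  # residual connection

# Toy usage: 5 mesh vertices on a ring, feature dimension 4.
rng = np.random.default_rng(0)
N, d = 5, 4
X = rng.standard_normal((N, d))
A = np.eye(N) + np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
Wq, Wk, Wv, Wg = (rng.standard_normal((d, d)) for _ in range(4))
Y = graphormer_block(X, A, Wq, Wk, Wv, Wg)
print(Y.shape)
```

The attention term lets every vertex attend to every other vertex and joint, while the adjacency-weighted step restricts mixing to mesh neighbors, which is the local/global complementarity the abstract describes.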
Cite
Text
Lin et al. "Mesh Graphormer." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01270
Markdown
[Lin et al. "Mesh Graphormer." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/lin2021iccv-mesh/) doi:10.1109/ICCV48922.2021.01270
BibTeX
@inproceedings{lin2021iccv-mesh,
title = {{Mesh Graphormer}},
author = {Lin, Kevin and Wang, Lijuan and Liu, Zicheng},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {12939-12948},
doi = {10.1109/ICCV48922.2021.01270},
url = {https://mlanthology.org/iccv/2021/lin2021iccv-mesh/}
}