Multiple View Geometry Transformers for 3D Human Pose Estimation
Abstract
In this work, we aim to improve the 3D reasoning ability of Transformers in multi-view 3D human pose estimation. Recent works have focused on end-to-end learning-based transformer designs, which struggle to resolve geometric information accurately, particularly during occlusion. Instead, we propose a novel hybrid model, MVGFormer, which has a series of geometric and appearance modules organized in an iterative manner. The geometry modules are learning-free and handle all viewpoint-dependent 3D tasks geometrically, which notably improves the model's generalization ability. The appearance modules are learnable and are dedicated to estimating 2D poses from image signals end-to-end, which enables them to achieve accurate estimates even when occlusion occurs, leading to a model that is both accurate and generalizable to new cameras and geometries. We evaluate our approach in both in-domain and out-of-domain settings, where it consistently outperforms state-of-the-art methods, and does so by a significant margin in the out-of-domain setting. We will release the code and models: https://github.com/XunshanMan/MVGFormer.
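The abstract describes an alternation between learning-free geometry modules and learnable appearance modules. Below is a minimal sketch of what such an iterative loop could look like, assuming standard pinhole projection and DLT triangulation as the geometry steps and a small MLP as a stand-in for the appearance module. The names (project, triangulate, AppearanceModule, iterative_refine) and the per-view feature inputs are hypothetical illustrations, not taken from the authors' released code.

# Illustrative sketch only -- an assumed reading of the abstract, not the authors' implementation.
import torch
import torch.nn as nn


def project(points3d, P):
    """Project 3D joints (J, 3) with a 3x4 camera matrix P into 2D joints (J, 2). Learning-free."""
    homo = torch.cat([points3d, torch.ones(points3d.shape[0], 1)], dim=1)  # (J, 4)
    uvw = homo @ P.T                                                       # (J, 3)
    return uvw[:, :2] / uvw[:, 2:3]


def triangulate(points2d, Ps):
    """DLT triangulation. points2d: (V, J, 2), Ps: (V, 3, 4) -> 3D joints (J, 3). Learning-free."""
    V, J, _ = points2d.shape
    joints = []
    for j in range(J):
        rows = []
        for v in range(V):
            u, w = points2d[v, j]
            rows.append(u * Ps[v, 2] - Ps[v, 0])
            rows.append(w * Ps[v, 2] - Ps[v, 1])
        A = torch.stack(rows)                  # (2V, 4)
        _, _, Vh = torch.linalg.svd(A)
        X = Vh[-1]                             # right singular vector of the smallest singular value
        joints.append(X[:3] / X[3])
    return torch.stack(joints)


class AppearanceModule(nn.Module):
    """Learnable part: refines each projected 2D joint from (hypothetical) sampled image features."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 2, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, feats, proj2d):          # feats: (J, C), proj2d: (J, 2)
        return proj2d + self.mlp(torch.cat([feats, proj2d], dim=1))


def iterative_refine(init3d, feats_per_view, Ps, appearance, num_iters=4):
    """Alternate geometry (project/triangulate) and appearance (2D refinement) over several iterations."""
    joints3d = init3d
    for _ in range(num_iters):
        refined2d = []
        for v in range(Ps.shape[0]):
            proj2d = project(joints3d, Ps[v])                          # geometry, per view
            refined2d.append(appearance(feats_per_view[v], proj2d))    # learnable 2D estimate
        joints3d = triangulate(torch.stack(refined2d), Ps)             # geometry, fuse views
    return joints3d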
Cite
Text
Liao et al. "Multiple View Geometry Transformers for 3D Human Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00074

Markdown
[Liao et al. "Multiple View Geometry Transformers for 3D Human Pose Estimation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/liao2024cvpr-multiple/) doi:10.1109/CVPR52733.2024.00074

BibTeX
@inproceedings{liao2024cvpr-multiple,
title = {{Multiple View Geometry Transformers for 3D Human Pose Estimation}},
author = {Liao, Ziwei and Zhu, Jialiang and Wang, Chunyu and Hu, Han and Waslander, Steven L.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {708-717},
doi = {10.1109/CVPR52733.2024.00074},
url = {https://mlanthology.org/cvpr/2024/liao2024cvpr-multiple/}
}