3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
Abstract
We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans with consistent appearance across view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of the human body, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates images that are consistent under varying view angles and poses; iii) it incorporates the 3D human prior and enables pose conditioning. Project page: https://3dhumangan.github.io/.
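The abstract's central architectural idea, a 2D convolutional backbone whose activations are modulated by features rendered from a pose-conditioned implicit function, can be sketched as below. This is a minimal illustration only: the class names (`PoseMappingMLP`, `ModulatedConvBlock`), the SPADE-style scale/shift modulation, and all layer sizes are assumptions for exposition rather than the authors' implementation, and the reshape that stands in for rendering elides the actual rasterization of the implicit function.

```python
# Minimal sketch of the generator design described in the abstract.
# Assumptions: SPADE-style modulation, the MLP sizes, and the rendering
# stub are illustrative, not the paper's actual implementation.
import torch
import torch.nn as nn

class PoseMappingMLP(nn.Module):
    """Hypothetical implicit function: maps a 3D point on the posed human
    mesh, plus an appearance latent code, to a feature vector. Queried at
    points covering the body and rendered to the image plane, it yields a
    2D feature map that modulates the convolutional backbone."""
    def __init__(self, latent_dim=512, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, points, z):
        # points: (B, N, 3) samples on the posed 3D human mesh
        # z:      (B, latent_dim) appearance latent code
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, z], dim=-1))  # (B, N, feat_dim)

class ModulatedConvBlock(nn.Module):
    """2D conv block whose activations receive a per-pixel scale and shift
    predicted from the rendered pose feature map (SPADE-style assumption)."""
    def __init__(self, channels, feat_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale = nn.Conv2d(feat_dim, channels, 1)
        self.to_shift = nn.Conv2d(feat_dim, channels, 1)

    def forward(self, x, pose_feat_map):
        h = self.conv(x)
        return h * (1 + self.to_scale(pose_feat_map)) + self.to_shift(pose_feat_map)

# Usage: render pose features, then modulate a backbone activation with them.
mlp = PoseMappingMLP()
block = ModulatedConvBlock(channels=128)
z = torch.randn(2, 512)
points = torch.randn(2, 64 * 64, 3)                  # mesh surface samples
feat_map = mlp(points, z).transpose(1, 2).reshape(2, 64, 64, 64)  # rendering stub
out = block(torch.randn(2, 128, 64, 64), feat_map)   # modulated activations
```

Under this reading, view angle and body pose enter the generator only through the rendered 3D features, so the same appearance code produces a consistent identity as the camera or pose changes, which is the consistency property the abstract claims.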
Cite
Text
Yang et al. "3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.02103
Markdown
[Yang et al. "3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/yang2023iccv-3dhumangan/) doi:10.1109/ICCV51070.2023.02103
BibTeX
@inproceedings{yang2023iccv-3dhumangan,
title = {{3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping}},
author = {Yang, Zhuoqian and Li, Shikai and Wu, Wayne and Dai, Bo},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {23008--23019},
doi = {10.1109/ICCV51070.2023.02103},
url = {https://mlanthology.org/iccv/2023/yang2023iccv-3dhumangan/}
}