WarpHE4D: Dense 4D Head mAP Toward Full Head Reconstruction
Abstract
We address the 3D head reconstruction problem and the facial correspondence search problem in a unified framework, named WarpHE4D. The underlying idea is to establish correspondences between the facial image and a fixed UV texture map by exploiting powerful self-supervised visual representations, i.e., DINOv2. In other words, we predict UV coordinates for each pixel that map the pixel to a point in the UV map. At the same time, we predict a nose-centered depth map that leverages the facial correspondences. Note that our framework does not require fitting a template model, e.g., a 3DMM, to the image; instead, it directly regresses a 4D vector for each pixel. The experimental results show that our approach not only improves the accuracy of head geometry but also significantly improves robustness under pose and viewpoint variations, particularly when the head is rotated more than 90 degrees. We believe our method can serve as groundwork for photorealistic head avatar generation, even in uncalibrated camera settings.
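To make the core idea concrete, here is a minimal sketch of how dense per-pixel UV predictions can warp an input image into a fixed UV texture map. This is an illustrative assumption, not the paper's implementation: the function name, the scatter-and-average strategy, and the texture resolution are all hypothetical, and the network's actual outputs and warping details are not specified in the abstract.

```python
import numpy as np

def warp_image_to_uv(image, uv_map, tex_size=256):
    """Scatter image pixels into a fixed UV texture map using predicted
    per-pixel UV coordinates.

    image:  (H, W, C) float array, the facial image.
    uv_map: (H, W, 2) float array, predicted UV coordinates in [0, 1]
            (a hypothetical network output head).
    Returns a (tex_size, tex_size, C) texture where pixels mapping to
    the same texel are averaged.
    """
    H, W, C = image.shape
    texture = np.zeros((tex_size, tex_size, C), dtype=np.float64)
    counts = np.zeros((tex_size, tex_size, 1), dtype=np.float64)

    # Quantize continuous UV coordinates to integer texel indices.
    u = np.clip((uv_map[..., 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    v = np.clip((uv_map[..., 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)

    # Accumulate colors and hit counts per texel (handles duplicate indices).
    np.add.at(texture, (v, u), image)
    np.add.at(counts, (v, u), 1.0)

    # Average where at least one pixel landed; empty texels stay zero.
    return texture / np.maximum(counts, 1.0)
```

In a full pipeline, the fourth predicted channel (the nose-centered depth) would travel alongside the UV coordinates, so each pixel carries a 4D vector; the sketch above only illustrates the 2D correspondence half.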
Cite
Yun et al. "WarpHE4D: Dense 4D Head mAP Toward Full Head Reconstruction." International Conference on Computer Vision, 2025.
@inproceedings{yun2025iccv-warphe4d,
  title     = {{WarpHE4D: Dense 4D Head mAP Toward Full Head Reconstruction}},
  author    = {Yun, Jongseob and Kwon, Yong-Hoon and Park, Min-Gyu and Kang, Ju-Mi and Lee, Min-Ho and Chang, Inho and Yoon, Ju Hong and Yoon, Kuk-Jin},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {11480--11490},
  url       = {https://mlanthology.org/iccv/2025/yun2025iccv-warphe4d/}
}