Texture Generation on 3D Meshes with Point-UV Diffusion
Abstract
In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate 3D-consistent, high-quality texture images in UV space. We begin by introducing a point diffusion model that synthesizes low-frequency texture components, with tailored style guidance to tackle the biased color distribution. The derived coarse texture offers global consistency and serves as a condition for the subsequent UV diffusion stage, helping regularize the model to generate a 3D-consistent UV texture image. We then develop a UV diffusion model with hybrid conditions to enhance texture fidelity in the 2D UV space. Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
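The abstract describes a two-stage generative pipeline: a point diffusion model produces coarse per-point colors on the mesh surface, which are then baked into UV space and refined by a 2D UV diffusion model. The sketch below illustrates the overall structure under stated assumptions; it is not the authors' released code, and all module names (`PointDenoiser`, `ddpm_sample`), network shapes, and the noise schedule are hypothetical placeholders standing in for the paper's architectures.

```python
# A minimal, hypothetical sketch of the coarse-to-fine Point-UV pipeline.
# Everything here is an illustrative placeholder, not the paper's implementation.
import torch
import torch.nn as nn

T = 1000                                   # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # standard linear DDPM schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class PointDenoiser(nn.Module):
    """Toy stand-in for the stage-1 point diffusion network: predicts the
    noise on per-point colors, conditioned on point positions (the paper
    additionally uses style guidance, omitted here)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + 1, hidden), nn.SiLU(),   # color + xyz + timestep
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),                      # predicted noise on RGB
        )

    def forward(self, noisy_color, xyz, t):
        t_feat = t.float().view(-1, 1, 1).expand(-1, xyz.shape[1], 1) / T
        return self.net(torch.cat([noisy_color, xyz, t_feat], dim=-1))

@torch.no_grad()
def ddpm_sample(denoiser, shape, cond):
    """Standard DDPM ancestral sampling loop over the per-point colors."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, cond, tt)
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

# Stage 1: coarse colors on points sampled from the mesh surface.
points = torch.rand(1, 2048, 3)            # stand-in for surface samples
coarse = ddpm_sample(PointDenoiser(), (1, 2048, 3), points)

# Stage 2 (not shown): rasterize `coarse` into the UV texture image and run
# a 2D UV diffusion model conditioned on this coarse texture plus geometry
# maps (the paper's "hybrid conditions") to restore high-frequency detail.
```

The key design point the sketch tries to convey is the hand-off between stages: stage 1 only needs to be globally consistent (low-frequency color), so it can operate on sparse surface points, while stage 2 works in regular 2D UV space where high-resolution image diffusion is cheap.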
Cite
Text
Yu et al. "Texture Generation on 3D Meshes with Point-UV Diffusion." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00388

Markdown
[Yu et al. "Texture Generation on 3D Meshes with Point-UV Diffusion." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/yu2023iccv-texture/) doi:10.1109/ICCV51070.2023.00388

BibTeX
@inproceedings{yu2023iccv-texture,
title = {{Texture Generation on 3D Meshes with Point-UV Diffusion}},
author = {Yu, Xin and Dai, Peng and Li, Wenbo and Ma, Lan and Liu, Zhengzhe and Qi, Xiaojuan},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {4206--4216},
doi = {10.1109/ICCV51070.2023.00388},
url = {https://mlanthology.org/iccv/2023/yu2023iccv-texture/}
}