Learning to Generate Textures on 3D Meshes

Abstract

Recent years have seen a great deal of work on photorealistic neural image synthesis from 2D image datasets. However, only a few works exploit 3D shape information to aid image synthesis. To this end, we leverage data from 2D image datasets as well as 3D model corpora to generate textured 3D models. We propose a framework for texture generation on meshes from multiview images. Our framework first uses 2.5D information rendered from the 3D models, along with user inputs, to generate an intermediate view-dependent representation. These intermediate representations are then used to generate realistic textures for particular views in an unpaired manner. Finally, we use a differentiable renderer to combine the generated multiview textures into a single textured mesh. We demonstrate results of realistic texture synthesis on cars.

Cite

Text

Raj et al. "Learning to Generate Textures on 3D Meshes." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Raj et al. "Learning to Generate Textures on 3D Meshes." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/raj2019cvprw-learning/)

BibTeX

@inproceedings{raj2019cvprw-learning,
  title     = {{Learning to Generate Textures on 3D Meshes}},
  author    = {Raj, Amit and Ham, Cusuh and Barnes, Connelly and Kim, Vladimir G. and Lu, Jingwan and Hays, James},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {32--38},
  url       = {https://mlanthology.org/cvprw/2019/raj2019cvprw-learning/}
}