Text2Mesh: Text-Driven Neural Stylization for Meshes

Abstract

In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details that conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term a neural style field (NSF) network. In order to modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus, and does not require UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes. Our code and results are available on our project webpage: https://threedle.github.io/text2mesh/.
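The abstract describes the core architecture: a fixed input mesh provides the content, while a small MLP (the neural style field) maps each vertex position to a color and a displacement along the vertex normal, optimized so that CLIP scores rendered views highly against the text prompt. Below is a minimal PyTorch sketch of such a network; the layer widths, Fourier-feature scale, and output scaling are illustrative assumptions rather than the paper's exact hyperparameters, and the rendering/CLIP steps are only summarized in comments.

```python
import torch
import torch.nn as nn

class NeuralStyleField(nn.Module):
    """Sketch of an NSF-style network: maps a vertex position to an RGB
    color offset and a scalar displacement along the vertex normal.
    Layer widths and the Fourier-feature scale are illustrative only."""

    def __init__(self, num_freqs=256, sigma=5.0, hidden=256):
        super().__init__()
        # Random Fourier features lift the 3D coordinate so the MLP can
        # represent high-frequency color and geometric detail.
        self.register_buffer("B", torch.randn(3, num_freqs) * sigma)
        self.backbone = nn.Sequential(
            nn.Linear(2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 3), nn.Tanh())
        self.disp_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1), nn.Tanh())

    def forward(self, verts):
        proj = 2 * torch.pi * verts @ self.B           # (V, num_freqs)
        feat = torch.cat([proj.sin(), proj.cos()], dim=-1)
        h = self.backbone(feat)
        color = 0.5 * self.color_head(h)               # bounded color offset
        disp = 0.1 * self.disp_head(h)                 # small displacement magnitude
        return color, disp


# Usage sketch: stylize the vertices of a fixed input mesh (the "content").
verts = torch.rand(1000, 3)                            # placeholder vertex positions
normals = nn.functional.normalize(torch.rand(1000, 3), dim=-1)
nsf = NeuralStyleField()
color, disp = nsf(verts)
styled_verts = verts + disp * normals                  # displace along vertex normals
# In Text2Mesh, the colored and displaced mesh is differentiably rendered from
# multiple viewpoints, and the CLIP similarity between those renders and the
# target text prompt is maximized to update the NSF weights.
```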

Cite

Text

Michel et al. "Text2Mesh: Text-Driven Neural Stylization for Meshes." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01313

Markdown

[Michel et al. "Text2Mesh: Text-Driven Neural Stylization for Meshes." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/michel2022cvpr-text2mesh/) doi:10.1109/CVPR52688.2022.01313

BibTeX

@inproceedings{michel2022cvpr-text2mesh,
  title     = {{Text2Mesh: Text-Driven Neural Stylization for Meshes}},
  author    = {Michel, Oscar and Bar-On, Roi and Liu, Richard and Benaim, Sagie and Hanocka, Rana},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {13492--13502},
  doi       = {10.1109/CVPR52688.2022.01313},
  url       = {https://mlanthology.org/cvpr/2022/michel2022cvpr-text2mesh/}
}