Diverse Text-to-3D Synthesis with Augmented Text Embedding

Abstract

Text-to-3D synthesis has recently emerged as a new approach to sampling 3D models by adopting pretrained text-to-image models as guiding visual priors. An intriguing but underexplored problem with existing text-to-3D methods is that 3D models obtained from the sampling-by-optimization procedure tend to exhibit mode collapse, and hence poor diversity in their results. In this paper, we provide an analysis and identify potential causes of such limited diversity, which motivates us to devise a new method that considers the joint generation of different 3D models from the same text prompt. We propose to use augmented text prompts, obtained via textual inversion of reference images, to diversify the joint generation. We show that our method improves diversity in text-to-3D synthesis both qualitatively and quantitatively. Project page: https://diversedream.github.io/

Cite

Text

Tran et al. "Diverse Text-to-3D Synthesis with Augmented Text Embedding." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73226-3_13

Markdown

[Tran et al. "Diverse Text-to-3D Synthesis with Augmented Text Embedding." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/tran2024eccv-diverse/) doi:10.1007/978-3-031-73226-3_13

BibTeX

@inproceedings{tran2024eccv-diverse,
  title     = {{Diverse Text-to-3D Synthesis with Augmented Text Embedding}},
  author    = {Tran, Uy Dieu and Luu, Minh N. Hoang and Nguyen, Phong Ha and Nguyen, Khoi and Hua, Binh-Son},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73226-3_13},
  url       = {https://mlanthology.org/eccv/2024/tran2024eccv-diverse/}
}