LATTE3D: Large-Scale Amortized Text-to-Enhanced3D Synthesis
Abstract
Recent text-to-3D generation approaches produce impressive 3D results but require time-consuming optimization that can take up to an hour per prompt. Amortized methods like ATT3D optimize multiple prompts simultaneously to improve efficiency, enabling fast text-to-3D synthesis. However, they cannot capture high-frequency geometry and texture details and struggle to scale to large prompt sets, so they generalize poorly. We introduce LATTE3D, addressing these limitations to achieve fast, high-quality generation on a significantly larger prompt set. Key to our method is 1) building a scalable architecture and 2) leveraging 3D data during optimization through 3D-aware diffusion priors, shape regularization, and model initialization to achieve robustness to diverse and complex training prompts. LATTE3D amortizes both neural field and textured surface generation to produce highly detailed textured meshes in a single forward pass. LATTE3D generates 3D objects in 400 ms, and can be further enhanced with fast test-time optimization.
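The core idea the abstract describes is amortization: instead of running a lengthy per-prompt optimization, a single network is trained over a whole prompt set and then produces a 3D representation in one forward pass. The sketch below is a minimal, hypothetical illustration of that training pattern, not the authors' code: the class and function names (`AmortizedText2Shape`, `fake_render`, `fake_guidance_loss`) and all dimensions are illustrative placeholders standing in for LATTE3D's actual architecture, differentiable renderer, and 3D-aware diffusion prior.

```python
# Minimal sketch of amortized text-to-3D training (illustrative only).
# One network maps text embeddings to 3D-representation parameters; a
# frozen guidance loss scores rendered views. Real systems replace the
# fake_* stand-ins with a differentiable renderer and a diffusion prior.
import torch
import torch.nn as nn


class AmortizedText2Shape(nn.Module):
    """Maps a text embedding to parameters of a small neural field."""

    def __init__(self, text_dim: int = 512, field_dim: int = 3 * 64 * 64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, field_dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # One forward pass per prompt: no per-prompt optimization loop.
        return self.decoder(text_emb)


def fake_render(field_params: torch.Tensor) -> torch.Tensor:
    """Stand-in for differentiable rendering of the predicted 3D field."""
    return field_params.view(-1, 3, 64, 64)


def fake_guidance_loss(images: torch.Tensor) -> torch.Tensor:
    """Stand-in for a score-distillation-style diffusion-prior loss."""
    return images.pow(2).mean()


model = AmortizedText2Shape()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Amortization: each step samples a *batch of different prompts*, so one
# network is optimized jointly for the whole prompt set.
for step in range(100):
    text_emb = torch.randn(8, 512)      # batch of prompt embeddings
    params = model(text_emb)            # predicted 3D representations
    views = fake_render(params)         # rendered views of each shape
    loss = fake_guidance_loss(views)    # diffusion-prior-style objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference, generation is just `model(text_emb)` followed by rendering, which is what makes sub-second synthesis possible; the "fast test-time optimization" mentioned in the abstract would correspond to a few extra gradient steps on a single prompt starting from the amortized network's output.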
Cite
Text
Xie et al. "LATTE3D: Large-Scale Amortized Text-to-Enhanced3D Synthesis." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72980-5_18

Markdown
[Xie et al. "LATTE3D: Large-Scale Amortized Text-to-Enhanced3D Synthesis." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/xie2024eccv-latte3d/) doi:10.1007/978-3-031-72980-5_18

BibTeX
@inproceedings{xie2024eccv-latte3d,
title = {{LATTE3D: Large-Scale Amortized Text-to-Enhanced3D Synthesis}},
author = {Xie, Kevin and Cao, Tianshi and Lorraine, Jonathan P and Gao, Jun and Lucas, James R and Torralba, Antonio and Fidler, Sanja and Zeng, Xiaohui},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72980-5_18},
url = {https://mlanthology.org/eccv/2024/xie2024eccv-latte3d/}
}