Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model
Abstract
Text-to-3D with diffusion models has achieved remarkable progress in recent years. However, existing methods either rely on score distillation-based optimization, which suffers from slow inference, low diversity, and Janus problems, or are feed-forward methods that generate low-quality results due to the scarcity of 3D training data. In this paper, we propose Instant3D, a novel method that generates high-quality and diverse 3D assets from text prompts in a feed-forward manner. We adopt a two-stage paradigm: first, a fine-tuned 2D text-to-image diffusion model generates a sparse set of four structured and consistent views from text in one shot; then, a novel transformer-based sparse-view reconstructor directly regresses the NeRF from the generated images. Through extensive experiments, we demonstrate that our method can generate diverse 3D assets of high visual quality within 20 seconds, two orders of magnitude faster than previous optimization-based methods, which can take 1 to 10 hours. Our project webpage is: https://jiahao.ai/instant3d/.
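The abstract describes a two-stage pipeline: a fine-tuned diffusion model maps a text prompt to four consistent views, and a transformer-based reconstructor regresses a NeRF from those views in a single forward pass. The sketch below illustrates only this data flow; every name in it (sparse_view_generator, sparse_view_reconstructor, instant3d, the triplane dictionary) is a hypothetical placeholder, not the authors' code or API.

```python
# Illustrative sketch of the two-stage pipeline described in the abstract.
# All names and shapes are hypothetical stand-ins, not the paper's implementation.
import numpy as np

def sparse_view_generator(prompt: str, num_views: int = 4, size: int = 64) -> np.ndarray:
    # Stand-in for the fine-tuned 2D text-to-image diffusion model that
    # produces four structured, consistent views of the object in one shot.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((num_views, size, size, 3))  # (views, H, W, channels)

def sparse_view_reconstructor(views: np.ndarray) -> dict:
    # Stand-in for the transformer-based reconstructor that directly regresses
    # a NeRF representation from the sparse views in a single forward pass.
    features = views.reshape(views.shape[0], -1).mean(axis=0)
    return {"triplane_features": features}  # hypothetical NeRF parameterization

def instant3d(prompt: str) -> dict:
    views = sparse_view_generator(prompt)   # stage 1: text -> 4 views
    return sparse_view_reconstructor(views) # stage 2: views -> NeRF

if __name__ == "__main__":
    nerf = instant3d("a ceramic mug shaped like a pumpkin")
    print(nerf["triplane_features"].shape)
```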
Cite
Text
Li et al. "Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model." International Conference on Learning Representations, 2024.
Markdown
[Li et al. "Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/li2024iclr-instant3d/)
BibTeX
@inproceedings{li2024iclr-instant3d,
  title = {{Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model}},
  author = {Li, Jiahao and Tan, Hao and Zhang, Kai and Xu, Zexiang and Luan, Fujun and Xu, Yinghao and Hong, Yicong and Sunkavalli, Kalyan and Shakhnarovich, Greg and Bi, Sai},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/li2024iclr-instant3d/}
}