IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation
Abstract
Most text-to-3D generators build upon off-the-shelf text-to-image models trained on billions of images. They use variants of Score Distillation Sampling (SDS), which is slow, somewhat unstable, and prone to artifacts. A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly. In this paper, we further explore the design space of text-to-3D models. We significantly improve multi-view generation by considering video instead of image generators. Combined with a 3D reconstruction algorithm which, by using Gaussian splatting, can optimize a robust image-based loss, we directly produce high-quality 3D outputs from the generated views. Our new method, IM-3D, reduces the number of evaluations of the 2D generator network by 10-100$\times$, resulting in a much more efficient pipeline, better quality, fewer geometric inconsistencies, and a higher yield of usable 3D assets.
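
To make the pipeline described above concrete, here is a minimal, runnable Python sketch of the iterate-between-diffusion-and-reconstruction loop that the title and abstract describe. Every function here is a hypothetical stub (`generate_multiview_video`, `fit_gaussian_splats`, `renoise_and_denoise` are placeholder names, not the authors' API), and the image resolution, view count, and refinement details are assumptions for illustration only.

```python
import numpy as np

# Hypothetical stubs standing in for the real models. Every name, shape, and
# default below is an assumption for illustration, not the authors' code.

def generate_multiview_video(prompt: str, n_views: int = 16) -> np.ndarray:
    """Stand-in for a video diffusion model fine-tuned to emit a turntable
    of multi-view frames of a single object (returns n_views RGB images)."""
    rng = np.random.default_rng(0)
    return rng.random((n_views, 256, 256, 3), dtype=np.float32)

def fit_gaussian_splats(views: np.ndarray) -> dict:
    """Stand-in for Gaussian-splat reconstruction. Because splatting renders
    full images quickly, the fit can minimize a robust image-based loss
    between renders and the generated views (the specific loss used is an
    assumption here)."""
    return {"frames": views}  # placeholder "3D model"

def render_views(splats: dict) -> np.ndarray:
    """Stand-in for rendering the splats from the same camera orbit."""
    return splats["frames"]

def renoise_and_denoise(renders: np.ndarray, prompt: str) -> np.ndarray:
    """Stand-in for the iterative refinement step: the real pipeline would
    partially noise the renders and denoise them with the multiview
    diffusion model to clean up residual inconsistencies. This stub is a
    pass-through."""
    return renders

def text_to_3d(prompt: str, iterations: int = 3) -> dict:
    views = generate_multiview_video(prompt)   # 1. multiview generation
    splats = fit_gaussian_splats(views)        # 2. initial reconstruction
    for _ in range(iterations):
        renders = render_views(splats)                 # 3. render current model
        views = renoise_and_denoise(renders, prompt)   # 4. diffusion refinement
        splats = fit_gaussian_splats(views)            # 5. refit and repeat
    return splats

if __name__ == "__main__":
    model = text_to_3d("a ceramic teapot shaped like a pumpkin")
    print(type(model))
```

One way to read the abstract's claimed 10-100$\times$ reduction in 2D-generator evaluations: a loop of this shape needs only a handful of full diffusion sampling passes, whereas SDS-based distillation typically queries the 2D model once per optimization step over thousands of steps.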
Cite
Text
Melas-Kyriazi et al. "IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation." International Conference on Machine Learning, 2024.

Markdown

[Melas-Kyriazi et al. "IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/melaskyriazi2024icml-im3d/)

BibTeX
@inproceedings{melaskyriazi2024icml-im3d,
  title     = {{IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation}},
  author    = {Melas-Kyriazi, Luke and Laina, Iro and Rupprecht, Christian and Neverova, Natalia and Vedaldi, Andrea and Gafni, Oran and Kokkinos, Filippos},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {35310--35323},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/melaskyriazi2024icml-im3d/}
}