MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View

Abstract

Generating consistent multiple views for 3D reconstruction tasks remains a challenge for existing image-to-3D diffusion models. Generally, incorporating 3D representations into the diffusion model decreases the model's speed as well as its generalizability and quality. This paper proposes a general framework to generate consistent multi-view images from a single image by leveraging a scene representation transformer and a view-conditioned diffusion model. In the model, we introduce epipolar geometry constraints and multi-view attention to enforce 3D consistency. From as few as one input image, our model is able to generate 3D meshes that surpass baseline methods on evaluation metrics, including PSNR, SSIM, and LPIPS.
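
To make the 3D-consistency mechanism concrete, below is a minimal sketch (not the authors' code) of cross-view attention with an optional epipolar mask. The module name, tensor shapes, and the convention that the mask marks blocked pixel pairs are all assumptions for illustration; the epipolar mask itself is assumed to be precomputed from camera geometry.

# Minimal sketch of multi-view attention with an optional epipolar mask.
# Assumed (hypothetical) shapes: per-view feature maps of size (B, V, C, H, W)
# and a boolean mask over all pixel-token pairs across views.
from typing import Optional

import torch
import torch.nn as nn


class MultiViewAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(
        self,
        feats: torch.Tensor,
        epipolar_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        # feats: (B, V, C, H, W) -> one token sequence over all views: (B, V*H*W, C)
        B, V, C, H, W = feats.shape
        tokens = feats.permute(0, 1, 3, 4, 2).reshape(B, V * H * W, C)
        x = self.norm(tokens)

        # epipolar_mask: (B, V*H*W, V*H*W) boolean, True where attention is
        # blocked (pixel pairs far from each other's epipolar lines) -- an
        # assumed convention for this sketch.
        attn_mask = None
        if epipolar_mask is not None:
            # nn.MultiheadAttention expects 3-D masks shaped (B*num_heads, L, L)
            attn_mask = epipolar_mask.repeat_interleave(self.attn.num_heads, dim=0)

        out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        return tokens + out  # residual connection keeps per-view features


if __name__ == "__main__":
    B, V, C, H, W = 1, 4, 64, 8, 8          # toy sizes for a quick shape check
    feats = torch.randn(B, V, C, H, W)
    block = MultiViewAttention(dim=C)
    print(block(feats).shape)                # torch.Size([1, 256, 64])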

Cite

Text

Bourigault and Bourigault. "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00753

Markdown

[Bourigault and Bourigault. "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/bourigault2024cvprw-mvdiff/) doi:10.1109/CVPRW63382.2024.00753

BibTeX

@inproceedings{bourigault2024cvprw-mvdiff,
  title     = {{MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View}},
  author    = {Bourigault, Emmanuelle and Bourigault, Pauline},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {7579--7586},
  doi       = {10.1109/CVPRW63382.2024.00753},
  url       = {https://mlanthology.org/cvprw/2024/bourigault2024cvprw-mvdiff/}
}