Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction

Abstract

Novel view synthesis via Neural Radiance Fields (NeRF) or 3D Gaussian Splatting (3DGS) typically requires dense observations, often hundreds of input images, to avoid artifacts. We introduce Deceptive-NeRF/3DGS¹ to enhance sparse-view reconstruction from only a limited set of input images by leveraging a diffusion model pre-trained on multi-view datasets. Rather than using diffusion priors to regularize the optimization of the representation, our method directly uses diffusion-generated images to train NeRF/3DGS as if they were real input views. Specifically, we propose a deceptive diffusion model that turns noisy images rendered from few-view reconstructions into high-quality, photorealistic pseudo-observations. To ensure consistency between pseudo-observations and real input views, we develop an uncertainty measure to guide the diffusion model's generation. Our system progressively incorporates diffusion-generated pseudo-observations into the training image set, ultimately densifying the sparse input observations by 5 to 10 times. Extensive experiments across diverse and challenging datasets validate that our approach outperforms existing state-of-the-art methods and can synthesize novel views at super-resolution in the few-view setting.

Project page: https://xinhangliu.com/deceptive-nerf-3dgs

¹ In harmonic progression, a deceptive cadence disrupts the expected chord progression yet enriches the emotional expression of the music. Our Deceptive-X, where "X" can be NeRF, 3DGS, or another pertinent 3D reconstruction framework, counters overfitting to sparse input views by densely synthesizing consistent pseudo-observations, enriching the original sparse inputs fivefold to tenfold.
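The abstract describes a progressive loop: fit a reconstruction from the current training set, render new views (which are noisy in the few-view regime), clean them with the deceptive diffusion model, keep only pseudo-observations whose uncertainty is low, and repeat. A minimal toy sketch of that control flow is below; every function here (`reconstruct`, `render_view`, `deceptive_diffusion`, `uncertainty`) is a hypothetical stand-in, not the authors' API, and "images" are plain floats so the loop runs without any 3D or diffusion dependencies.

```python
import random

def reconstruct(train_images):
    # Stand-in for fitting NeRF/3DGS: the "scene" is just the mean value.
    return sum(train_images) / len(train_images)

def render_view(scene, noise=0.3):
    # Few-view renders are noisy and artifact-prone; model that as jitter.
    return scene + random.uniform(-noise, noise)

def deceptive_diffusion(noisy_render, scene):
    # Stand-in for the diffusion model that turns a noisy render into a
    # photorealistic pseudo-observation (here: pull it toward the scene).
    return 0.5 * (noisy_render + scene)

def uncertainty(image, scene):
    # Toy uncertainty measure: distance from the current reconstruction.
    return abs(image - scene)

def densify(input_images, rounds=3, views_per_round=4, tau=0.5):
    """Progressively grow the training set with filtered pseudo-observations."""
    train = list(input_images)
    for _ in range(rounds):
        scene = reconstruct(train)
        for _ in range(views_per_round):
            pseudo = deceptive_diffusion(render_view(scene), scene)
            # Keep only pseudo-observations consistent with the inputs.
            if uncertainty(pseudo, scene) < tau:
                train.append(pseudo)
    return train

dense = densify([0.9, 1.0, 1.1])
print(len(dense))  # training set grown from the 3 real inputs
```

In the paper's setting this densification multiplies the input set fivefold to tenfold; the thresholded acceptance step stands in for the uncertainty-guided generation described above.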

Cite

Text

Liu et al. "Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72640-8_19

Markdown

[Liu et al. "Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/liu2024eccv-diffusiongenerated/) doi:10.1007/978-3-031-72640-8_19

BibTeX

@inproceedings{liu2024eccv-diffusiongenerated,
  title     = {{Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction}},
  author    = {Liu, Xinhang and Chen, Jiaben and Kao, Shiu-Hong and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72640-8_19},
  url       = {https://mlanthology.org/eccv/2024/liu2024eccv-diffusiongenerated/}
}