Dream-to-Recon: Monocular 3D Reconstruction with Diffusion-Depth Distillation from Single Images

Abstract

Volumetric scene reconstruction from a single image is crucial for a broad range of applications, such as autonomous driving and robotics. Recent volumetric reconstruction methods achieve impressive results, but generally require expensive 3D ground truth or multi-view supervision. We propose to leverage pre-trained 2D diffusion models and depth prediction models to generate synthetic scene geometry from a single image, which can then be used to distill a feed-forward scene reconstruction model. Our experiments on the challenging KITTI-360 and Waymo datasets demonstrate that our method matches or outperforms state-of-the-art baselines that rely on multi-view supervision, and it offers unique advantages, for example in handling dynamic scenes.
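The abstract outlines a two-stage pipeline: pre-trained diffusion and depth models produce synthetic scene geometry from one image, and that geometry supervises a feed-forward student. Below is a minimal PyTorch sketch of this idea, under loose assumptions; all components (DepthPredictor, DiffusionInpainter, FeedForwardRecon, depth_to_occupancy) are hypothetical stand-ins for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthPredictor(nn.Module):
    """Placeholder for a pre-trained monocular depth model (frozen teacher)."""
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        return torch.rand(b, 1, h, w)  # dummy normalized depth map


class DiffusionInpainter(nn.Module):
    """Placeholder for a pre-trained 2D diffusion model that completes
    occluded image regions (here: random fill where the mask is 1)."""
    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        return image * (1 - mask) + torch.rand_like(image) * mask


class FeedForwardRecon(nn.Module):
    """Feed-forward student predicting a coarse occupancy volume from one image."""
    def __init__(self, depth_bins: int = 32):
        super().__init__()
        self.net = nn.Conv2d(3, depth_bins, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(image))  # (B, D, H, W) occupancy


def depth_to_occupancy(depth: torch.Tensor, depth_bins: int = 32) -> torch.Tensor:
    """Lift a depth map to a binary frustum occupancy volume: voxels at or
    behind the observed surface are marked occupied."""
    bin_centers = torch.linspace(0, 1, depth_bins).view(1, depth_bins, 1, 1)
    return (bin_centers >= depth).float()  # broadcasts to (B, D, H, W)


def build_pseudo_geometry(image, depth_model, diffusion_model):
    """Generate synthetic geometry for a single image: complete occluded
    regions with the diffusion model, then lift predicted depth to 3D."""
    depth = depth_model(image)
    # Hypothetical occlusion mask; a real pipeline would derive it from geometry.
    mask = (torch.rand_like(depth) > 0.9).float().expand_as(image)
    completed = diffusion_model(image, mask)
    completed_depth = depth_model(completed)
    return depth_to_occupancy(completed_depth)


def distillation_step(student, image, depth_model, diffusion_model, optimizer):
    """One training step distilling the synthetic geometry into the student."""
    with torch.no_grad():
        target = build_pseudo_geometry(image, depth_model, diffusion_model)
    pred = student(image)
    loss = F.binary_cross_entropy(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student = FeedForwardRecon()
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    img = torch.rand(2, 3, 64, 64)
    print(distillation_step(student, img, DepthPredictor(), DiffusionInpainter(), opt))

Note that the pseudo-geometry is built entirely under torch.no_grad(), so only the feed-forward student receives gradients; the diffusion and depth models act as fixed teachers, which is what makes the distillation single-image rather than multi-view.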

Cite

Text

Wulff et al. "Dream-to-Recon: Monocular 3D Reconstruction with Diffusion-Depth Distillation from Single Images." International Conference on Computer Vision, 2025.

Markdown

[Wulff et al. "Dream-to-Recon: Monocular 3D Reconstruction with Diffusion-Depth Distillation from Single Images." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/wulff2025iccv-dreamtorecon/)

BibTeX

@inproceedings{wulff2025iccv-dreamtorecon,
  title     = {{Dream-to-Recon: Monocular 3D Reconstruction with Diffusion-Depth Distillation from Single Images}},
  author    = {Wulff, Philipp and Wimbauer, Felix and Muhle, Dominik and Cremers, Daniel},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {9352--9362},
  url       = {https://mlanthology.org/iccv/2025/wulff2025iccv-dreamtorecon/}
}