Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions
Abstract
We present a novel approach designed to address the complexities posed by challenging, out-of-distribution data in the single-image depth estimation task. Starting from images that facilitate depth prediction due to the absence of unfavorable factors, we systematically generate new, user-defined scenes with a comprehensive set of challenges and associated depth information. This is achieved by leveraging cutting-edge text-to-image diffusion models with depth-aware control, known for synthesizing high-quality image content from textual prompts while preserving the coherence of 3D structure between generated and source imagery. Any monocular depth network is then fine-tuned through a self-distillation protocol that combines images generated with our strategy and the network's own depth predictions on simple, unchallenging scenes. Experiments on benchmarks tailored for our purposes demonstrate the effectiveness and versatility of our proposal.
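The self-distillation protocol described above can be sketched as a single training step: the network first pseudo-labels an easy image with its own depth prediction, a depth-conditioned diffusion model then synthesizes a challenging counterpart of that image, and the network is supervised on the challenging image with the pseudo-label. The sketch below is illustrative only: `depth_net` is a toy stand-in for any monocular depth network, and `generate_challenging` is a hypothetical placeholder for the depth-aware text-to-image diffusion step, replaced here by noise perturbation so the code runs without model weights.

```python
import torch
import torch.nn as nn

# Toy stand-in for any monocular depth network (the paper's protocol
# is agnostic to the architecture).
depth_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

def generate_challenging(image, depth, prompt):
    """Hypothetical placeholder for a depth-conditioned text-to-image
    diffusion model that renders the scene under adverse conditions
    described by `prompt` while preserving the 3D structure in `depth`.
    Here we merely perturb the image so the sketch stays runnable."""
    return (image + 0.3 * torch.randn_like(image)).clamp(0.0, 1.0)

opt = torch.optim.Adam(depth_net.parameters(), lr=1e-4)
easy_batch = torch.rand(2, 3, 64, 64)  # simple, favorable scenes

# 1) Pseudo-label the easy images with the network's own predictions.
with torch.no_grad():
    pseudo_depth = depth_net(easy_batch)

# 2) Synthesize challenging counterparts sharing the same 3D structure.
hard_batch = generate_challenging(easy_batch, pseudo_depth, "rainy night scene")

# 3) Supervise the network on the hard images with the easy pseudo-labels.
loss = nn.functional.l1_loss(depth_net(hard_batch), pseudo_depth)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the generation step would use a depth-aware diffusion model (e.g. a ControlNet-style conditioning on depth), and the fine-tuning loop would iterate over many prompts describing different adverse conditions.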
Cite
Text
Tosi et al. "Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73337-6_14
Markdown
[Tosi et al. "Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/tosi2024eccv-diffusion/) doi:10.1007/978-3-031-73337-6_14
BibTeX
@inproceedings{tosi2024eccv-diffusion,
title = {{Diffusion Models for Monocular Depth Estimation: Overcoming Challenging Conditions}},
author = {Tosi, Fabio and Ramirez, Pierluigi Zama and Poggi, Matteo},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73337-6_14},
url = {https://mlanthology.org/eccv/2024/tosi2024eccv-diffusion/}
}