OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration
Abstract
Depth completion (DC) aims to predict a dense depth map from an RGB image and a sparse depth map. Existing DC methods generalize poorly to new datasets or unseen sparse depth patterns, limiting their real-world applications. We propose OMNI-DC, a highly robust DC model that generalizes well zero-shot to various datasets. The key design is a novel Multi-Resolution Depth Integrator, allowing our model to deal with very sparse depth inputs. We also introduce a novel Laplacian loss to model the ambiguity in the training process. Moreover, we train OMNI-DC on a mixture of high-quality datasets with a scale normalization technique and synthetic depth patterns. Extensive experiments on 7 datasets show consistent improvements over baselines, reducing errors by as much as 43%. Code and checkpoints are available at https://github.com/princeton-vl/OMNI-DC.
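For intuition on the kind of Laplacian loss the abstract mentions, the sketch below shows a per-pixel Laplacian negative log-likelihood in PyTorch, where the network is assumed to predict a depth mean and a log-scale that captures ambiguity. The function name `laplacian_nll_loss` and the tensor names `mu`, `log_b`, `gt`, and `valid` are illustrative assumptions, not identifiers from the paper, and the authors' exact formulation may differ.

```python
# Minimal sketch of a Laplacian negative log-likelihood loss (assumed
# formulation; OMNI-DC's actual loss may differ in details).
import torch


def laplacian_nll_loss(mu: torch.Tensor,
                       log_b: torch.Tensor,
                       gt: torch.Tensor,
                       valid: torch.Tensor) -> torch.Tensor:
    """Laplacian NLL averaged over valid ground-truth pixels.

    mu:    predicted depth, shape (B, 1, H, W)
    log_b: predicted log-scale (predicting log b keeps b > 0), same shape
    gt:    ground-truth depth, same shape
    valid: boolean mask marking pixels with valid ground truth, same shape
    """
    b = torch.exp(log_b)
    # -log p(gt | mu, b) for Laplace(mu, b), up to the constant log 2
    nll = torch.abs(gt - mu) / b + log_b
    return nll[valid].mean()


# Usage with random tensors standing in for network outputs:
if __name__ == "__main__":
    B, H, W = 2, 64, 64
    mu = torch.rand(B, 1, H, W) * 10
    log_b = torch.zeros(B, 1, H, W, requires_grad=True)
    gt = torch.rand(B, 1, H, W) * 10
    valid = gt > 0
    loss = laplacian_nll_loss(mu, log_b, gt, valid)
    loss.backward()
    print(float(loss))
```

Compared with a plain L1 loss, the predicted scale lets the model down-weight pixels where the depth is inherently ambiguous (e.g., far from any sparse measurement) instead of forcing a single confident value everywhere.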
Cite
Text
Zuo et al. "OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration." International Conference on Computer Vision, 2025.
Markdown
[Zuo et al. "OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zuo2025iccv-omnidc/)
BibTeX
@inproceedings{zuo2025iccv-omnidc,
  title     = {{OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration}},
  author    = {Zuo, Yiming and Yang, Willow and Ma, Zeyu and Deng, Jia},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {9287--9297},
  url       = {https://mlanthology.org/iccv/2025/zuo2025iccv-omnidc/}
}