MDP-Omni: Parameter-Free Multimodal Depth Prior-Based Sampling for Omnidirectional Stereo Matching
Abstract
Omnidirectional stereo matching (OSM) estimates 360° depth by performing stereo matching on multi-view fisheye images. Existing methods assume a unimodal depth distribution, matching each pixel to a single object. However, this assumption constrains the sampling range, causing over-smoothed depth artifacts, especially at object boundaries. To address these limitations, we propose MDP-Omni, a novel OSM network that leverages parameter-free multimodal depth priors. Specifically, we design a sampling strategy that adaptively adjusts the sampling range based on a multimodal probability distribution, without introducing any additional parameters. Furthermore, we present an azimuth-based multi-view volume fusion module that builds a single cost volume, mitigating false matches caused by occlusions in warped multi-view volumes. Experimental results demonstrate that MDP-Omni significantly outperforms existing methods, particularly in capturing fine details.
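The abstract does not spell out the sampling procedure, but the core idea of sampling around the modes of a per-pixel depth distribution rather than a single unimodal range can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the authors' method: the function name, the local-maximum mode finding, and the use of neighboring depth hypotheses as a parameter-free sampling range are all hypothetical.

```python
import numpy as np

def multimodal_depth_samples(depth_hyps, probs, n_samples=8):
    """Illustrative sketch (NOT the paper's method): draw depth samples
    around the modes of a per-pixel depth probability distribution,
    instead of one unimodal range.

    depth_hyps: (D,) candidate depths; probs: (D,) probabilities summing to 1.
    """
    probs = np.asarray(probs, dtype=float)
    depth_hyps = np.asarray(depth_hyps, dtype=float)
    # Modes = local maxima of the distribution (hypothetical heuristic).
    modes = [i for i in range(1, len(probs) - 1)
             if probs[i] > probs[i - 1] and probs[i] >= probs[i + 1]]
    if not modes:
        modes = [int(np.argmax(probs))]
    # Allocate the sample budget to modes by their probability mass.
    weights = probs[modes] / probs[modes].sum()
    samples = []
    for m, w in zip(modes, weights):
        k = max(1, int(round(w * n_samples)))
        # The spread of neighboring hypotheses defines each local range,
        # so no hand-tuned variance parameter is introduced.
        lo = depth_hyps[max(m - 1, 0)]
        hi = depth_hyps[min(m + 1, len(depth_hyps) - 1)]
        samples.extend(np.linspace(lo, hi, k))
    return np.sort(np.asarray(samples))[:n_samples]
```

For a bimodal distribution (e.g. a foreground object boundary over a background), the samples concentrate around both peaks instead of smearing across the gap between them, which is the over-smoothing failure mode the abstract describes.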
Cite
Text
Son et al. "MDP-Omni: Parameter-Free Multimodal Depth Prior-Based Sampling for Omnidirectional Stereo Matching." International Conference on Computer Vision, 2025.
Markdown
[Son et al. "MDP-Omni: Parameter-Free Multimodal Depth Prior-Based Sampling for Omnidirectional Stereo Matching." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/son2025iccv-mdpomni/)
BibTeX
@inproceedings{son2025iccv-mdpomni,
title = {{MDP-Omni: Parameter-Free Multimodal Depth Prior-Based Sampling for Omnidirectional Stereo Matching}},
author = {Son, Eunjin and Jo, HyungGi and Kwon, Wookyong and Lee, Sang Jun},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {26178--26187},
url = {https://mlanthology.org/iccv/2025/son2025iccv-mdpomni/}
}