Underwater Visual SLAM with Depth Uncertainty and Medium Modeling

Abstract

Underwater visual simultaneous localization and mapping (SLAM) faces critical challenges from light attenuation and degraded geometric consistency. Despite recent advances in visual SLAM for indoor and urban scenes, these approaches typically assume a clear medium and neglect medium-light interactions, leading to performance degradation in underwater environments. To overcome these limitations, we propose DUV-SLAM, a dense underwater visual SLAM framework that integrates uncertainty-aware geometry estimation with physics-inspired neural scattering modeling. Our method introduces two core innovations: i) depth uncertainty quantification derived from differentiable bundle adjustment, which propagates geometric confidence to guide mapping optimization; and ii) a neural-Gaussian hybrid representation that combines adaptive 3D Gaussians for underwater reconstruction with a neural field capturing wavelength-dependent medium properties, optimized using a combination of photometric, geometric, and distribution losses. Experiments on synthetic and real-world datasets demonstrate that DUV-SLAM achieves high-quality monocular reconstruction while maintaining real-time efficiency and robust tracking accuracy.
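To make the "wavelength-dependent medium properties" concrete, the sketch below implements the standard underwater image formation model (direct signal attenuated by the medium plus distance-dependent backscatter) that physics-inspired approaches like this one typically build on. The function name, coefficient values, and per-channel parameterization are illustrative assumptions, not the paper's actual implementation; DUV-SLAM learns the medium properties with a neural field rather than using fixed coefficients.

```python
import numpy as np

def underwater_color(J, z, beta_d, beta_b, B_inf):
    """Observed RGB color I at scene distance z (meters).

    J      : true scene radiance per channel, shape (3,)
    beta_d : direct-signal attenuation coefficients per channel (1/m)
    beta_b : backscatter coefficients per channel (1/m)
    B_inf  : veiling (background) light color, shape (3,)
    """
    direct = J * np.exp(-beta_d * z)                   # attenuated signal
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))  # added haze
    return direct + backscatter

# Illustrative coefficients: red attenuates fastest underwater,
# so beta values decrease from R to G to B.
J = np.array([0.8, 0.6, 0.5])
beta_d = np.array([0.40, 0.10, 0.05])
beta_b = np.array([0.30, 0.12, 0.08])
B_inf = np.array([0.05, 0.25, 0.35])

I_near = underwater_color(J, 1.0, beta_d, beta_b, B_inf)
I_far = underwater_color(J, 10.0, beta_d, beta_b, B_inf)
```

With distance, the observed color shifts away from the true radiance `J` toward the veiling light `B_inf`, and the red channel collapses first; this is the wavelength dependence the neural field in the paper is designed to capture.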

Cite

Text

Liu et al. "Underwater Visual SLAM with Depth Uncertainty and Medium Modeling." International Conference on Computer Vision, 2025.

Markdown

[Liu et al. "Underwater Visual SLAM with Depth Uncertainty and Medium Modeling." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/liu2025iccv-underwater/)

BibTeX

@inproceedings{liu2025iccv-underwater,
  title     = {{Underwater Visual SLAM with Depth Uncertainty and Medium Modeling}},
  author    = {Liu, Rui and Fan, Sheng and Wang, Wenguan and Yang, Yi},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {970--980},
  url       = {https://mlanthology.org/iccv/2025/liu2025iccv-underwater/}
}