Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes
Abstract
We address the challenge of constructing a consistent and photorealistic Neural Radiance Field in inhomogeneously illuminated, scattering environments with unknown, co-moving light sources. While most existing work on underwater scene representation assumes static, homogeneous illumination, little attention has been paid to scenarios such as a robot exploring water deeper than a few tens of meters, where sunlight becomes insufficient. To address this, we propose a novel illumination field locally attached to the camera, enabling the capture of uneven lighting effects within the viewing frustum. We combine this with a volumetric medium representation into an overall method that effectively handles the interaction between the dynamic illumination field and the static scattering medium. Evaluation results demonstrate the effectiveness and flexibility of our approach.
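To make the idea concrete, below is a minimal numpy sketch of how such a renderer could composite one ray: the illumination is queried in camera-relative coordinates (so the light pattern travels with the robot), the surface albedo is medium- and light-independent, and the water contributes SeaThru-NeRF-style attenuation and backscatter terms. The function name, parameters, and the exact medium model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_ray(t, delta, sigma, albedo, illum, beta_attn, beta_bs, c_med):
    """Composite one ray with a co-moving illumination field and a static medium.

    t        : (N,) sample distances along the ray (camera frame)
    delta    : (N,) segment lengths between samples
    sigma    : (N,) object densities from the scene field
    albedo   : (N, 3) medium- and light-independent surface colors
    illum    : (N, 3) illumination queried at *camera-relative* coordinates,
               so the lighting pattern moves with the camera
    beta_attn: (3,) per-channel attenuation coefficient of the water
    beta_bs  : (3,) per-channel backscatter coefficient of the water
    c_med    : (3,) medium (backscatter) color
    """
    # Transmittance from object occlusion (standard NeRF compositing).
    tau = sigma * delta
    T_obj = np.exp(-np.concatenate([[0.0], np.cumsum(tau)[:-1]]))
    alpha = 1.0 - np.exp(-tau)

    # Direct component: illuminated surface color, attenuated by the
    # medium over the travel distance t (illustrative one-way model).
    direct = ((T_obj * alpha)[:, None] * albedo * illum
              * np.exp(-beta_attn * t[:, None]))

    # Backscatter component: light scattered back toward the camera
    # by the water itself, accumulated segment by segment.
    bs = (T_obj[:, None] * np.exp(-beta_bs * t[:, None])
          * (1.0 - np.exp(-beta_bs * delta[:, None])) * c_med)

    return direct.sum(axis=0) + bs.sum(axis=0)
```

Because `illum` depends only on camera-frame coordinates while `sigma`, `albedo`, and the medium coefficients live in the world frame, the scene and water can in principle be recovered independently of the moving light, which is the separation the abstract describes.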
Cite

Text

She et al. "Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes." International Conference on Computer Vision, 2025.

Markdown

[She et al. "Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/she2025iccv-relative/)

BibTeX
@inproceedings{she2025iccv-relative,
title = {{Relative Illumination Fields: Learning Medium and Light Independent Underwater Scenes}},
author = {She, Mengkun and Seegräber, Felix and Nakath, David and Schöntag, Patricia and Köser, Kevin},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {29110-29119},
url = {https://mlanthology.org/iccv/2025/she2025iccv-relative/}
}