Neural Underwater Scene Representation

Abstract

Among the numerous efforts towards digitally recovering the physical world, Neural Radiance Fields (NeRFs) have proved effective in most cases. However, underwater scenes introduce unique challenges due to the absorbing water medium, local changes in lighting, and dynamic contents in the scene. We aim to develop a neural underwater scene representation that addresses these challenges, modeling the complex processes of attenuation, unstable in-scattering, and moving objects during light transport. The proposed method can reconstruct scenes from both established datasets and in-the-wild videos with outstanding fidelity.
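The abstract refers to attenuation and in-scattering in the water medium. As background (not the paper's actual formulation), a minimal sketch of the standard underwater image-formation model often used in this literature, in which observed radiance is attenuated scene radiance plus depth-dependent backscatter; the function and parameter names here are illustrative assumptions:

```python
import numpy as np

def underwater_formation(J, depth, beta_att, beta_sc, B_inf):
    """Illustrative underwater image-formation model (not the paper's method).

    J        : clear-scene radiance (per pixel/channel)
    depth    : distance from camera to scene point
    beta_att : attenuation coefficient of the water medium
    beta_sc  : backscatter coefficient
    B_inf    : veiling light (backscatter at infinite depth)
    """
    # Direct component: scene radiance exponentially attenuated with range.
    direct = J * np.exp(-beta_att * depth)
    # Backscatter component: grows toward the veiling light with range.
    backscatter = B_inf * (1.0 - np.exp(-beta_sc * depth))
    return direct + backscatter
```

At zero range the model returns the clear-scene radiance unchanged; as range grows, the observed color converges to the veiling light, which is the haze effect the abstract's attenuation/in-scattering modeling must undo.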

Cite

Text

Tang et al. "Neural Underwater Scene Representation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01119

Markdown

[Tang et al. "Neural Underwater Scene Representation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/tang2024cvpr-neural/) doi:10.1109/CVPR52733.2024.01119

BibTeX

@inproceedings{tang2024cvpr-neural,
  title     = {{Neural Underwater Scene Representation}},
  author    = {Tang, Yunkai and Zhu, Chengxuan and Wan, Renjie and Xu, Chao and Shi, Boxin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {11780--11789},
  doi       = {10.1109/CVPR52733.2024.01119},
  url       = {https://mlanthology.org/cvpr/2024/tang2024cvpr-neural/}
}