NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields

Abstract

Sound plays a major role in human perception. Along with vision, it provides essential information for understanding our surroundings. Despite advances in neural implicit representations, learning acoustics that align with visual scenes remains a challenge. We propose NeRAF, a method that jointly learns acoustic and radiance fields. NeRAF synthesizes both novel views and spatialized room impulse responses (RIRs) at new positions by conditioning the acoustic field on 3D geometric and appearance priors from the radiance field. The generated RIRs can be applied to auralize any audio signal. Each modality can be rendered independently and at spatially distinct positions, offering greater versatility. We demonstrate that NeRAF generates high-quality audio on the SoundSpaces and RAF datasets, achieving significant performance improvements over prior methods while being more data-efficient. Additionally, NeRAF enhances novel view synthesis of complex scenes trained with sparse data through cross-modal learning. NeRAF is designed as a Nerfstudio module, providing convenient access to realistic audio-visual generation. Project page: https://amandinebtto.github.io/NeRAF
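
As a concrete illustration of the auralization step mentioned in the abstract: a predicted RIR is convolved with a dry (anechoic) source signal to produce the spatialized output. The minimal sketch below assumes a two-channel RIR array and a mono source at the same sample rate; the auralize helper is illustrative only and is not part of NeRAF's released code.

import numpy as np
from scipy.signal import fftconvolve

def auralize(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry mono signal with a two-channel (binaural) RIR.

    dry: shape (n_samples,); rir: shape (2, rir_len), one row per ear.
    Returns a (2, n_samples + rir_len - 1) spatialized signal.
    """
    # Full convolution per channel; fftconvolve is efficient for long RIRs.
    out = np.stack([fftconvolve(dry, rir[ch]) for ch in range(rir.shape[0])])
    # Peak-normalize to avoid clipping when exporting to fixed-point audio.
    return out / (np.abs(out).max() + 1e-8)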

Cite

Text

Brunetto et al. "NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields." International Conference on Learning Representations, 2025.

Markdown

[Brunetto et al. "NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/brunetto2025iclr-neraf/)

BibTeX

@inproceedings{brunetto2025iclr-neraf,
  title     = {{NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields}},
  author    = {Brunetto, Amandine and Hornauer, Sascha and Moutarde, Fabien},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/brunetto2025iclr-neraf/}
}