Few-Shot Audio-Visual Learning of Environment Acoustics

Abstract

Room impulse response (RIR) functions capture how the surrounding physical environment transforms the sounds heard by a listener, with implications for various applications in AR, VR, and robotics. Whereas traditional methods to estimate RIRs assume dense geometry and/or sound measurements throughout the environment, we explore how to infer RIRs based on a sparse set of images and echoes observed in the space. Towards that goal, we introduce a transformer-based method that uses self-attention to build a rich acoustic context, then predicts RIRs of arbitrary query source-receiver locations through cross-attention. Additionally, we design a novel training objective that improves the match in the acoustic signature between the RIR predictions and the targets. In experiments using a state-of-the-art audio-visual simulator for 3D environments, we demonstrate that our method successfully generates arbitrary RIRs, outperforming state-of-the-art methods and---in a major departure from traditional methods---generalizing to novel environments in a few-shot manner. Project: http://vision.cs.utexas.edu/projects/fs_rir
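
The abstract outlines the architecture at a high level: self-attention aggregates the sparse audio-visual observations into an acoustic context, and cross-attention from a query source-receiver pose reads out the predicted RIR. The minimal PyTorch sketch below illustrates only that data flow; the module names, feature dimensions, pose encoding, and RIR decoder head are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of the idea described in the abstract: encode a few
# audio-visual observations with self-attention, then cross-attend from a
# query source-receiver pose to predict an RIR. Dimensions and names are
# illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class FewShotRIRPredictor(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2, rir_len=4096):
        super().__init__()
        # Project per-observation features (e.g., image + echo embeddings
        # concatenated with the observation pose) into a shared space.
        self.obs_proj = nn.Linear(512 + 7, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # The query is the source-receiver pose we want an RIR for.
        self.query_proj = nn.Linear(7, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Decode the attended feature into an RIR waveform (simplified head).
        self.rir_head = nn.Linear(d_model, rir_len)

    def forward(self, obs_feats, obs_poses, query_pose):
        # obs_feats:  (B, N, 512) audio-visual features of N sparse observations
        # obs_poses:  (B, N, 7)   poses of those observations
        # query_pose: (B, 1, 7)   source-receiver pose to synthesize an RIR for
        ctx = self.context_encoder(
            self.obs_proj(torch.cat([obs_feats, obs_poses], -1)))
        q = self.query_proj(query_pose)
        attended, _ = self.cross_attn(q, ctx, ctx)   # query attends over context
        return self.rir_head(attended).squeeze(1)    # (B, rir_len)

if __name__ == "__main__":
    model = FewShotRIRPredictor()
    rir = model(torch.randn(2, 8, 512), torch.randn(2, 8, 7), torch.randn(2, 1, 7))
    print(rir.shape)  # torch.Size([2, 4096])

In this sketch the query pose attends over the encoded observations in a single step and a linear head emits a waveform; the paper's actual encoder, RIR decoder, and acoustic-signature training objective are described in the full text linked above.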

Cite

Text

Majumder et al. "Few-Shot Audio-Visual Learning of Environment Acoustics." Neural Information Processing Systems, 2022.

Markdown

[Majumder et al. "Few-Shot Audio-Visual Learning of Environment Acoustics." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/majumder2022neurips-fewshot/)

BibTeX

@inproceedings{majumder2022neurips-fewshot,
  title     = {{Few-Shot Audio-Visual Learning of Environment Acoustics}},
  author    = {Majumder, Sagnik and Chen, Changan and Al-Halah, Ziad and Grauman, Kristen},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/majumder2022neurips-fewshot/}
}