Novel-View Acoustic Synthesis

Abstract

We introduce the novel-view acoustic synthesis (NVAS) task: given the sight and sound observed at a source viewpoint, can we synthesize the sound of that scene from an unseen target viewpoint? We propose a neural rendering approach, the Visually-Guided Acoustic Synthesis (ViGAS) network, which learns to synthesize the sound at an arbitrary point in space by analyzing the input audio-visual cues. To benchmark this task, we collect two first-of-their-kind large-scale multi-view audio-visual datasets, one synthetic and one real. We show that our model successfully reasons about spatial cues and synthesizes faithful audio on both datasets. To our knowledge, this work represents the first formulation, dataset, and approach for solving the novel-view acoustic synthesis task, which has exciting potential applications ranging from AR/VR to art and design. We believe the future of novel-view synthesis, unlocked by this work, lies in multi-modal learning from videos.
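To make the task's inputs and outputs concrete, the sketch below re-renders audio from a source viewpoint to a target viewpoint using only free-field geometry (inverse-distance attenuation and propagation delay). This is not the ViGAS model or any code from the paper; it is a naive, geometry-only reference point that ignores reverberation, occlusion, and the visual cues the approach exploits. The function name naive_viewpoint_transfer, the assumption of a known emitter position, and all parameter choices are illustrative assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C


def naive_viewpoint_transfer(audio, sr, emitter_pos, src_view_pos, tgt_view_pos):
    """Re-render mono audio recorded at a source viewpoint as if heard at a
    target viewpoint, using only distance attenuation and delay (free field).
    """
    d_src = np.linalg.norm(emitter_pos - src_view_pos)
    d_tgt = np.linalg.norm(emitter_pos - tgt_view_pos)

    # Inverse-distance (1/r) gain change between the two viewpoints.
    gain = d_src / max(d_tgt, 1e-6)

    # Extra propagation delay in samples (negative if the target is closer).
    delay_samples = int(round((d_tgt - d_src) / SPEED_OF_SOUND * sr))

    out = np.zeros_like(audio)
    if delay_samples >= 0:
        out[delay_samples:] = audio[: len(audio) - delay_samples]
    else:
        out[:delay_samples] = audio[-delay_samples:]
    return gain * out


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    audio = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)  # 1 s test tone
    rendered = naive_viewpoint_transfer(
        audio, sr,
        emitter_pos=np.array([2.0, 0.0, 1.5]),
        src_view_pos=np.array([0.0, 0.0, 1.5]),
        tgt_view_pos=np.array([4.0, 1.0, 1.5]),
    )
    print(rendered.shape, float(np.abs(rendered).max()))

The gap between such a geometric baseline and realistic room acoustics (reverberation, directivity, occlusion) is exactly what the learned, visually guided synthesis in the paper targets.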

Cite

Text

Chen et al. "Novel-View Acoustic Synthesis." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00620

Markdown

[Chen et al. "Novel-View Acoustic Synthesis." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/chen2023cvpr-novelview/) doi:10.1109/CVPR52729.2023.00620

BibTeX

@inproceedings{chen2023cvpr-novelview,
  title     = {{Novel-View Acoustic Synthesis}},
  author    = {Chen, Changan and Richard, Alexander and Shapovalov, Roman and Ithapu, Vamsi Krishna and Neverova, Natalia and Grauman, Kristen and Vedaldi, Andrea},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {6409--6419},
  doi       = {10.1109/CVPR52729.2023.00620},
  url       = {https://mlanthology.org/cvpr/2023/chen2023cvpr-novelview/}
}