Active Audio-Visual Separation of Dynamic Sound Sources
Abstract
We explore active audio-visual separation for dynamic sound sources, where an embodied agent moves intelligently in a 3D environment to continuously isolate the time-varying audio stream being emitted by an object of interest. The agent hears a mixed stream of multiple audio sources (e.g., multiple people conversing and a band playing music at a noisy party). Given a limited time budget, it needs to extract the target sound accurately at every step using egocentric audio-visual observations. We propose a reinforcement learning agent equipped with a novel transformer memory that learns motion policies to control its camera and microphone to recover the dynamic target audio, using self-attention to make high-quality estimates for the current timestep while simultaneously improving its past estimates. Using highly realistic acoustic SoundSpaces simulations in real-world scanned Matterport3D environments, we show that our model learns efficient behavior to carry out continuous separation of a dynamic audio target. Project: https://vision.cs.utexas.edu/projects/active-av-dynamic-separation/.
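To make the transformer-memory idea concrete, below is a minimal sketch of the core mechanism the abstract describes: self-attention over the agent's stored per-step audio-visual features, producing a separation estimate for every step so that past estimates are refined as new observations arrive. All dimensions, layer counts, and the mask-based output head are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TransformerSeparatorMemory(nn.Module):
    """Hedged sketch of a transformer memory for active separation.

    Self-attention runs over all stored steps, so the model emits a
    separation estimate (here, a spectrogram mask) for the current
    step AND retroactively for every past step. Hyperparameters and
    the mask head are hypothetical, for illustration only.
    """

    def __init__(self, embed_dim=512, n_heads=8, n_layers=4, mask_bins=257):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Per-step head predicting a soft mask over frequency bins.
        self.mask_head = nn.Sequential(
            nn.Linear(embed_dim, mask_bins), nn.Sigmoid())

    def forward(self, memory):
        # memory: (batch, T, embed_dim) fused audio-visual embeddings,
        # one per step the agent has taken so far.
        attended = self.encoder(memory)   # self-attention across all T steps
        return self.mask_head(attended)   # (batch, T, mask_bins)

# Usage: after 10 steps, re-estimate masks for all 10 steps at once,
# so earlier estimates benefit from later observations.
mem = torch.randn(1, 10, 512)             # hypothetical fused embeddings
masks = TransformerSeparatorMemory()(mem)  # shape (1, 10, 257)
```

Predicting over the whole memory at each step, rather than only the latest frame, is what lets self-attention retroactively improve past separation estimates as the agent gathers better viewpoints.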
Cite
Text
Majumder and Grauman. "Active Audio-Visual Separation of Dynamic Sound Sources." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19842-7_32
Markdown
[Majumder and Grauman. "Active Audio-Visual Separation of Dynamic Sound Sources." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/majumder2022eccv-active/) doi:10.1007/978-3-031-19842-7_32
BibTeX
@inproceedings{majumder2022eccv-active,
title = {{Active Audio-Visual Separation of Dynamic Sound Sources}},
author = {Majumder, Sagnik and Grauman, Kristen},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19842-7_32},
url = {https://mlanthology.org/eccv/2022/majumder2022eccv-active/}
}