Sound Source Localization Is All About Cross-Modal Alignment
Abstract
Humans can easily perceive the direction of sound sources in a visual scene, a task termed sound source localization. Recent studies on learning-based sound source localization have approached the problem mainly from a localization perspective. However, prior work and existing benchmarks overlook a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization. Cross-modal semantic understanding is crucial for handling semantically mismatched audio-visual events, e.g., silent objects or off-screen sounds. To address this, we propose a cross-modal alignment task trained jointly with sound source localization to better learn the interaction between the audio and visual modalities. As a result, we achieve high localization performance together with strong cross-modal semantic understanding. Our method outperforms state-of-the-art approaches in both sound source localization and cross-modal retrieval. Our work suggests that jointly tackling both tasks is necessary for genuine sound source localization.
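To make the joint objective concrete, below is a minimal PyTorch sketch of how a contrastive cross-modal alignment loss can be trained jointly with a cosine-similarity localization map. All function names, tensor shapes, the spatial-attention pooling, and the InfoNCE-style loss are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal sketch of joint sound source localization + cross-modal alignment.
# Hypothetical encoders, shapes, and loss weighting; not the authors' code.
import torch
import torch.nn.functional as F

def localization_map(img_feat, aud_emb):
    """Cosine-similarity heatmap between spatial visual features (B, C, H, W)
    and a global audio embedding (B, C); returns a (B, H, W) map."""
    img = F.normalize(img_feat, dim=1)
    aud = F.normalize(aud_emb, dim=1)
    return torch.einsum('bchw,bc->bhw', img, aud)

def alignment_loss(img_emb, aud_emb, tau=0.07):
    """Symmetric InfoNCE over global embeddings: matched audio-visual pairs
    are positives, all other pairs in the batch are negatives."""
    img = F.normalize(img_emb, dim=1)
    aud = F.normalize(aud_emb, dim=1)
    logits = img @ aud.t() / tau                 # (B, B) similarity matrix
    labels = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def localization_loss(img_feat, aud_emb, tau=0.07):
    """Contrastive loss on the audio-attended visual embedding: the heatmap
    softly pools the regions most similar to the sound."""
    sim = localization_map(img_feat, aud_emb)    # (B, H, W)
    attn = torch.softmax(sim.flatten(1), dim=1)  # spatial attention (B, HW)
    pooled = torch.einsum('bn,bcn->bc',
                          attn, img_feat.flatten(2))  # (B, C)
    return alignment_loss(pooled, aud_emb, tau)

# Joint objective: localization term plus explicit cross-modal alignment.
img_feat = torch.randn(8, 512, 14, 14)   # e.g. from a visual backbone
img_emb = img_feat.mean(dim=(2, 3))      # global visual embedding
aud_emb = torch.randn(8, 512)            # e.g. from an audio backbone
loss = localization_loss(img_feat, aud_emb) + alignment_loss(img_emb, aud_emb)

The point the sketch illustrates is that one shared embedding space serves both tasks: the similarity heatmap used for localization and the retrieval-style alignment loss are computed from the same audio-visual features.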
Cite
Text
Senocak et al. "Sound Source Localization Is All About Cross-Modal Alignment." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00715
Markdown
[Senocak et al. "Sound Source Localization Is All About Cross-Modal Alignment." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/senocak2023iccv-sound/) doi:10.1109/ICCV51070.2023.00715
BibTeX
@inproceedings{senocak2023iccv-sound,
  title = {{Sound Source Localization Is All About Cross-Modal Alignment}},
  author = {Senocak, Arda and Ryu, Hyeonggon and Kim, Junsik and Oh, Tae-Hyun and Pfister, Hanspeter and Chung, Joon Son},
  booktitle = {International Conference on Computer Vision},
  year = {2023},
  pages = {7777--7787},
  doi = {10.1109/ICCV51070.2023.00715},
  url = {https://mlanthology.org/iccv/2023/senocak2023iccv-sound/}
}