Unveiling Visual Biases in Audio-Visual Localization Benchmarks
Abstract
Audio-Visual Source Localization (AVSL) aims to localize the source of sound within a video. In this paper, we identify a significant issue in existing benchmarks: the sounding objects are often easily recognized based solely on visual cues, which we refer to as visual bias. Such biases hinder these benchmarks from effectively evaluating AVSL models. To validate our hypothesis regarding visual biases, we examine two representative AVSL benchmarks, VGG-SS and Epic-Sounding-Object, on which vision-only models outperform all audio-visual baselines. Our findings suggest that existing AVSL benchmarks need further refinement to facilitate audio-visual learning.
Cite
Text
Chen et al. "Unveiling Visual Biases in Audio-Visual Localization Benchmarks." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-93806-1_17
Markdown
[Chen et al. "Unveiling Visual Biases in Audio-Visual Localization Benchmarks." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/chen2024eccvw-unveiling/) doi:10.1007/978-3-031-93806-1_17
BibTeX
@inproceedings{chen2024eccvw-unveiling,
title = {{Unveiling Visual Biases in Audio-Visual Localization Benchmarks}},
author = {Chen, Liangyu and Yue, Zihao and Xu, Boshen and Jin, Qin},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {227--237},
doi = {10.1007/978-3-031-93806-1_17},
url = {https://mlanthology.org/eccvw/2024/chen2024eccvw-unveiling/}
}