Towards Open-Vocabulary Audio-Visual Event Localization

Abstract

The Audio-Visual Event Localization (AVEL) task aims to temporally locate and classify video events that are both audible and visible. Most research in this field assumes a closed-set setting, which restricts these models' ability to handle test data containing event categories absent (unseen) during training. Recently, a few studies have explored AVEL in an open-set setting, enabling the recognition of unseen events as "unknown", but without providing category-specific semantics. In this paper, we advance the field by introducing the Open-Vocabulary Audio-Visual Event Localization (OV-AVEL) problem, which requires localizing audio-visual events and predicting explicit categories for both seen and unseen data at inference. To address this new task, we propose the OV-AVEBench dataset, comprising 24,800 videos across 67 real-life audio-visual scenes (seen:unseen = 46:21), each with manual segment-level annotation. We also establish three evaluation metrics for this task. Moreover, we investigate two baseline approaches, one training-free and one using a further fine-tuning paradigm. Specifically, we utilize the unified multimodal space from the pretrained ImageBind model to extract audio, visual, and textual (event classes) features. The training-free baseline then determines predictions by comparing the consistency of audio-text and visual-text feature similarities. The fine-tuning baseline incorporates lightweight temporal layers to encode temporal relations within the audio and visual modalities, using OV-AVEBench training data for model fine-tuning. We evaluate these baselines on the proposed OV-AVEBench dataset and discuss potential directions for future work in this new field.
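The training-free baseline described above could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact decision rule: it assumes per-segment audio/visual embeddings and per-class text embeddings have already been extracted with ImageBind and L2-normalized, and the agreement rule and similarity threshold are assumptions made for the sketch.

```python
import numpy as np

def trainfree_predict(audio_emb, visual_emb, text_emb, threshold=0.2):
    """Per-segment open-vocabulary prediction via audio-text / visual-text
    similarity agreement (illustrative sketch, not the paper's exact rule).

    audio_emb, visual_emb: (T, D) L2-normalized segment embeddings.
    text_emb: (C, D) L2-normalized event-class text embeddings.
    Returns a list of T class indices, with -1 meaning background
    (no audio-visual event in that segment).
    """
    sim_a = audio_emb @ text_emb.T   # (T, C) audio-text cosine similarities
    sim_v = visual_emb @ text_emb.T  # (T, C) visual-text cosine similarities
    preds = []
    for t in range(audio_emb.shape[0]):
        ca = int(np.argmax(sim_a[t]))
        cv = int(np.argmax(sim_v[t]))
        # An audio-visual event requires both modalities to agree on the
        # same class with sufficient confidence; otherwise mark background.
        if ca == cv and sim_a[t, ca] > threshold and sim_v[t, cv] > threshold:
            preds.append(ca)
        else:
            preds.append(-1)
    return preds
```

Because the class set enters only through `text_emb`, unseen categories can be handled at inference simply by encoding their class names, which is what makes the setting open-vocabulary.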

Cite

Text

Zhou et al. "Towards Open-Vocabulary Audio-Visual Event Localization." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00783

Markdown

[Zhou et al. "Towards Open-Vocabulary Audio-Visual Event Localization." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/zhou2025cvpr-openvocabulary/) doi:10.1109/CVPR52734.2025.00783

BibTeX

@inproceedings{zhou2025cvpr-openvocabulary,
  title     = {{Towards Open-Vocabulary Audio-Visual Event Localization}},
  author    = {Zhou, Jinxing and Guo, Dan and Guo, Ruohao and Mao, Yuxin and Hu, Jingjing and Zhong, Yiran and Chang, Xiaojun and Wang, Meng},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {8362--8371},
  doi       = {10.1109/CVPR52734.2025.00783},
  url       = {https://mlanthology.org/cvpr/2025/zhou2025cvpr-openvocabulary/}
}