Audio-Visual Generalized Zero-Shot Learning Using Pre-Trained Large Multi-Modal Models
Abstract
Audio-visual zero-shot learning methods commonly build on features extracted from pre-trained models, e.g., video or audio classification models. However, existing benchmarks predate the popularization of large multi-modal models such as CLIP and CLAP. In this work, we explore such large pre-trained models to obtain features, i.e., CLIP for visual features and CLAP for audio features. Furthermore, the CLIP and CLAP text encoders provide class label embeddings which are combined to boost the performance of the system. We propose a simple yet effective model that only relies on feed-forward neural networks, exploiting the strong generalization capabilities of the new audio, visual and textual features. Our framework achieves state-of-the-art performance on VGGSound-GZSLcls, UCF-GZSLcls, and ActivityNet-GZSLcls with our new features. Code and data are available at: https://github.com/dkurzend/ClipClap-GZSL.
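The abstract describes a feed-forward model that scores CLIP visual and CLAP audio features against combined CLIP/CLAP class-label text embeddings. Below is a minimal, hypothetical PyTorch sketch of such a compatibility model; the module names, feature dimensions, fusion by averaging, and cosine-similarity scoring are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioVisualGZSLSketch(nn.Module):
    """Hypothetical sketch: project CLIP visual and CLAP audio features into a
    joint space and score them against combined CLIP+CLAP class-label embeddings."""

    def __init__(self, dim_v=512, dim_a=512, dim_t=1024, dim_joint=512):
        super().__init__()
        # simple feed-forward projections (assumed architecture)
        self.proj_v = nn.Sequential(nn.Linear(dim_v, dim_joint), nn.ReLU(),
                                    nn.Linear(dim_joint, dim_joint))
        self.proj_a = nn.Sequential(nn.Linear(dim_a, dim_joint), nn.ReLU(),
                                    nn.Linear(dim_joint, dim_joint))
        # text projection over the concatenated CLIP and CLAP label embeddings
        self.proj_t = nn.Linear(dim_t, dim_joint)

    def forward(self, feat_v, feat_a, text_emb):
        # fuse the projected audio and visual features (here: simple average)
        av = 0.5 * (self.proj_v(feat_v) + self.proj_a(feat_a))   # (B, dim_joint)
        cls = self.proj_t(text_emb)                                # (C, dim_joint)
        # cosine-similarity compatibility scores between clips and class labels
        av = F.normalize(av, dim=-1)
        cls = F.normalize(cls, dim=-1)
        return av @ cls.t()                                        # (B, C)


# usage with random placeholders standing in for extracted features
model = AudioVisualGZSLSketch()
feat_v = torch.randn(4, 512)      # CLIP visual features per video clip
feat_a = torch.randn(4, 512)      # CLAP audio features per video clip
text_emb = torch.randn(10, 1024)  # [CLIP text ; CLAP text] embeddings for 10 class labels
scores = model(feat_v, feat_a, text_emb)
pred = scores.argmax(dim=-1)      # predicted class index per clip
```

In a generalized zero-shot setting, the class-label embeddings would cover both seen and unseen classes, so the same compatibility scores can rank unseen categories at test time.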
Cite
Text
Kurzendörfer et al. "Audio-Visual Generalized Zero-Shot Learning Using Pre-Trained Large Multi-Modal Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00269
Markdown
[Kurzendörfer et al. "Audio-Visual Generalized Zero-Shot Learning Using Pre-Trained Large Multi-Modal Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/kurzendorfer2024cvprw-audiovisual/) doi:10.1109/CVPRW63382.2024.00269
BibTeX
@inproceedings{kurzendorfer2024cvprw-audiovisual,
title = {{Audio-Visual Generalized Zero-Shot Learning Using Pre-Trained Large Multi-Modal Models}},
author = {Kurzendörfer, David and Mercea, Otniel-Bogdan and Koepke, A. Sophia and Akata, Zeynep},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2024},
  pages = {2627--2638},
doi = {10.1109/CVPRW63382.2024.00269},
url = {https://mlanthology.org/cvprw/2024/kurzendorfer2024cvprw-audiovisual/}
}