Cross-Modal Prompts: Adapting Large Pre-Trained Models for Audio-Visual Downstream Tasks

Abstract

In recent years, the deployment of large-scale pre-trained models in audio-visual downstream tasks has yielded remarkable outcomes. However, these models, primarily trained on single-modality unconstrained datasets, still encounter challenges in feature extraction for multi-modal tasks, leading to suboptimal performance. This limitation arises due to the introduction of irrelevant modality-specific information during encoding, which adversely affects the performance of downstream tasks. To address this challenge, this paper proposes a novel Dual-Guided Spatial-Channel-Temporal (DG-SCT) attention mechanism. This mechanism leverages audio and visual modalities as soft prompts to dynamically adjust the parameters of pre-trained models based on the current multi-modal input features. Specifically, the DG-SCT module incorporates trainable cross-modal interaction layers into pre-trained audio-visual encoders, allowing adaptive extraction of crucial information from the current modality across spatial, channel, and temporal dimensions, while preserving the frozen parameters of large-scale pre-trained models. Experimental evaluations demonstrate that our proposed model achieves state-of-the-art results across multiple downstream tasks, including AVE, AVVP, AVS, and AVQA. Furthermore, our model exhibits promising performance in challenging few-shot and zero-shot scenarios. The source code and pre-trained models are available at https://github.com/haoyi-duan/DG-SCT.
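To make the core idea concrete, here is a minimal NumPy sketch of cross-modal channel attention in the spirit of DG-SCT: one modality (e.g. audio) acts as a soft prompt that produces per-channel gates for the other modality's frozen-encoder features. All names, shapes, and the single trainable matrix `W` are illustrative assumptions, not the paper's actual implementation, which also covers spatial and temporal dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(guide, feat, W):
    """Hypothetical cross-modal channel attention.

    `guide` (e.g. an audio embedding, shape [d_g]) produces soft gates in
    (0, 1) that re-weight `feat` (e.g. visual features, shape [C, H, W])
    along the channel axis. Only `W` would be trained; the features come
    from a frozen pre-trained encoder.
    """
    gates = sigmoid(W @ guide)          # [C] soft prompt from the guiding modality
    return feat * gates[:, None, None]  # scale each visual channel

d_g, C, H, W_sp = 8, 4, 5, 5
audio = rng.standard_normal(d_g)             # guiding (audio) embedding
visual = rng.standard_normal((C, H, W_sp))   # frozen visual-encoder features
W = rng.standard_normal((C, d_g)) * 0.1      # the only trainable parameters here

out = channel_gate(audio, visual, W)
print(out.shape)  # (4, 5, 5): same shape as the input visual features
```

Spatial and temporal attention follow the same pattern, with gates broadcast over spatial positions or video frames instead of channels; since only the small interaction layers are trainable, the large pre-trained backbones remain untouched.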

Cite

Text

Duan et al. "Cross-Modal Prompts: Adapting Large Pre-Trained Models for Audio-Visual Downstream Tasks." Neural Information Processing Systems, 2023.

Markdown

[Duan et al. "Cross-Modal Prompts: Adapting Large Pre-Trained Models for Audio-Visual Downstream Tasks." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/duan2023neurips-crossmodal/)

BibTeX

@inproceedings{duan2023neurips-crossmodal,
  title     = {{Cross-Modal Prompts: Adapting Large Pre-Trained Models for Audio-Visual Downstream Tasks}},
  author    = {Duan, Haoyi and Xia, Yan and Zhou, Mingze and Tang, Li and Zhu, Jieming and Zhao, Zhou},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/duan2023neurips-crossmodal/}
}