Contrastive Audio-Visual Masked Autoencoder
Abstract
In this paper, we first extend the recent Masked Autoencoder (MAE) model from a single modality to audio-visual multimodal data. We then propose the Contrastive Audio-Visual Masked Autoencoder (CAV-MAE), which combines contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new state-of-the-art (SOTA) accuracy of 65.9% on VGGSound and is comparable to the previous best supervised pretrained model on AudioSet for audio-visual event classification. Code and pretrained models are at https://github.com/yuangongnd/cav-mae.
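To make the combination of the two frameworks concrete, below is a minimal sketch of how a contrastive objective and a masked-reconstruction objective can be summed into one training loss. This is not the authors' implementation (see the linked repository for that); the function name, the pooled-embedding inputs, and the values of `tau` and `lambda_c` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_plus_mae_loss(audio_emb, visual_emb,
                              pred_patches, target_patches, mask,
                              tau=0.05, lambda_c=0.01):
    """Hypothetical combined loss in the spirit of CAV-MAE.

    audio_emb, visual_emb: (B, D) pooled embeddings of each modality.
    pred_patches, target_patches: (B, N, P) decoder outputs and ground truth.
    mask: (B, N), 1 where a patch was masked, 0 where it was visible.
    """
    # Contrastive part: symmetric InfoNCE over the batch, where the
    # matched audio-visual pair on the diagonal is the positive.
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / tau                       # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    loss_c = 0.5 * (F.cross_entropy(logits, labels) +
                    F.cross_entropy(logits.t(), labels))

    # Reconstruction part: mean squared error computed only on the
    # masked patches, as in the original MAE.
    per_patch = (pred_patches - target_patches).pow(2).mean(dim=-1)  # (B, N)
    loss_r = (per_patch * mask).sum() / mask.sum().clamp(min=1)

    # Weighted sum; lambda_c balances the two objectives.
    return loss_r + lambda_c * loss_c
```

The weighting matters in practice: the reconstruction term dominates in magnitude, so a small `lambda_c` (here an assumed 0.01) keeps the contrastive signal from overwhelming masked-patch prediction while still enforcing audio-visual correspondence.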
Cite
Text
Gong et al. "Contrastive Audio-Visual Masked Autoencoder." International Conference on Learning Representations, 2023.
Markdown
[Gong et al. "Contrastive Audio-Visual Masked Autoencoder." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/gong2023iclr-contrastive/)
BibTeX
@inproceedings{gong2023iclr-contrastive,
  title = {{Contrastive Audio-Visual Masked Autoencoder}},
  author = {Gong, Yuan and Rouditchenko, Andrew and Liu, Alexander H. and Harwath, David and Karlinsky, Leonid and Kuehne, Hilde and Glass, James R.},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/gong2023iclr-contrastive/}
}