MAViL: Masked Audio-Video Learners
Abstract
We present Masked Audio-Video Learners (MAViL) to learn audio-visual representations with three complementary forms of self-supervision: (1) reconstructing masked raw audio and video inputs, (2) intra-modal and inter-modal contrastive learning with masking, and (3) self-training to predict aligned and contextualized audio-video representations learned from the first two objectives. Empirically, MAViL achieves state-of-the-art audio-video classification performance on AudioSet (53.3 mAP) and VGGSound (67.1% accuracy), surpassing recent self-supervised models and supervised models that utilize external labeled data. Notably, pre-training with MAViL not only enhances performance in multimodal classification and retrieval tasks, but it also improves the representations of each modality in isolation, without relying on information from the other modality during uni-modal fine-tuning or inference. The code and models are available at https://github.com/facebookresearch/MAViL.
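The abstract's three training signals can be summarized as a sum of a masked-reconstruction loss, intra-/inter-modal contrastive losses, and a self-training (teacher-prediction) loss. Below is a minimal, hypothetical PyTorch sketch of how such a combined objective could be computed; the tensor shapes, mask ratio, loss weights, and variable names are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Hypothetical sketch of MAViL's three-part objective on toy tensors;
# names, shapes, and weights are assumptions, not the official code.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy stand-ins for encoder/decoder outputs on a batch of 8 clips.
B, N, D = 8, 196, 768
recon_a, target_a = torch.randn(B, N, D), torch.randn(B, N, D)  # audio patches
recon_v, target_v = torch.randn(B, N, D), torch.randn(B, N, D)  # video patches
mask_a = torch.rand(B, N) < 0.8          # masked audio patch positions
mask_v = torch.rand(B, N) < 0.8          # masked video patch positions

# (1) Masked reconstruction: regress only the masked patches per modality.
loss_recon = (F.mse_loss(recon_a[mask_a], target_a[mask_a])
              + F.mse_loss(recon_v[mask_v], target_v[mask_v]))

# (2) Contrastive: inter-modal (audio <-> video from the same clip) plus
# intra-modal (two masked views of the same modality).
cls_a, cls_v = torch.randn(B, D), torch.randn(B, D)     # pooled embeddings
cls_a2, cls_v2 = torch.randn(B, D), torch.randn(B, D)   # second masked views
loss_contrast = (info_nce(cls_a, cls_v)                 # inter-modal
                 + info_nce(cls_a, cls_a2)              # intra-modal, audio
                 + info_nce(cls_v, cls_v2))             # intra-modal, video

# (3) Self-training: the student predicts aligned, contextualized features
# from a teacher trained with objectives (1) and (2); teacher is frozen.
student_a, student_v = torch.randn(B, N, D), torch.randn(B, N, D)
with torch.no_grad():
    teacher_a, teacher_v = torch.randn(B, N, D), torch.randn(B, N, D)
loss_distill = (F.mse_loss(student_a[mask_a], teacher_a[mask_a])
                + F.mse_loss(student_v[mask_v], teacher_v[mask_v]))

loss = loss_recon + loss_contrast + loss_distill        # illustrative weighting
```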
Cite
Text
Huang et al. "MAViL: Masked Audio-Video Learners." Neural Information Processing Systems, 2023.
Markdown
[Huang et al. "MAViL: Masked Audio-Video Learners." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/huang2023neurips-mavil/)
BibTeX
@inproceedings{huang2023neurips-mavil,
title = {{MAViL: Masked Audio-Video Learners}},
author = {Huang, Po-Yao and Sharma, Vasu and Xu, Hu and Ryali, Chaitanya and Fan, Haoqi and Li, Yanghao and Li, Shang-Wen and Ghosh, Gargi and Malik, Jitendra and Feichtenhofer, Christoph},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/huang2023neurips-mavil/}
}