Attention Bottlenecks for Multimodal Fusion
Abstract
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks. A common approach to building multimodal models is to simply combine several of these modality-specific architectures using late-stage fusion of final representations or predictions ('late-fusion'). Instead, we introduce a novel transformer-based architecture that uses 'attention bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, these bottlenecks force information between different modalities to pass through a small number of 'bottleneck' latent units, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that this strategy improves fusion performance while also reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
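To make the bottleneck idea concrete, the following is a minimal PyTorch sketch of one fusion layer in the spirit of the abstract: each modality runs self-attention over its own tokens concatenated with a small set of shared bottleneck tokens, so cross-modal information can only flow through those bottleneck units. The class name `BottleneckFusionLayer`, the specific tensor shapes, and the omission of layer norms, MLP blocks, and residual connections are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class BottleneckFusionLayer(nn.Module):
    """One fusion layer: each modality attends over its own tokens plus a
    small set of shared bottleneck tokens; cross-modal information can only
    pass through those bottleneck tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, audio, video, bottleneck):
        # Audio stream: self-attention over [audio tokens ; bottleneck tokens].
        xa = torch.cat([audio, bottleneck], dim=1)
        xa, _ = self.attn_audio(xa, xa, xa)
        audio_out, bneck_a = xa[:, : audio.shape[1]], xa[:, audio.shape[1]:]

        # Video stream: self-attention over [video tokens ; bottleneck tokens].
        xv = torch.cat([video, bottleneck], dim=1)
        xv, _ = self.attn_video(xv, xv, xv)
        video_out, bneck_v = xv[:, : video.shape[1]], xv[:, video.shape[1]:]

        # Average the two updated copies of the bottleneck tokens so each
        # modality sees a condensed summary of the other at the next layer.
        bottleneck_out = 0.5 * (bneck_a + bneck_v)
        return audio_out, video_out, bottleneck_out


if __name__ == "__main__":
    B, Na, Nv, Nb, D = 2, 196, 196, 4, 256
    audio = torch.randn(B, Na, D)       # audio-spectrogram patch tokens
    video = torch.randn(B, Nv, D)       # video frame patch tokens
    bottleneck = torch.randn(B, Nb, D)  # small number of shared latent units

    layer = BottleneckFusionLayer(D)
    audio, video, bottleneck = layer(audio, video, bottleneck)
    print(audio.shape, video.shape, bottleneck.shape)
```

Because the bottleneck holds only a handful of tokens (e.g. 4) while each modality may have hundreds, the quadratic cost of full pairwise cross-modal attention is avoided, which is the efficiency argument made in the abstract.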
Cite
Text
Nagrani et al. "Attention Bottlenecks for Multimodal Fusion." Neural Information Processing Systems, 2021.
Markdown
[Nagrani et al. "Attention Bottlenecks for Multimodal Fusion." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/nagrani2021neurips-attention/)
BibTeX
@inproceedings{nagrani2021neurips-attention,
  title     = {{Attention Bottlenecks for Multimodal Fusion}},
  author    = {Nagrani, Arsha and Yang, Shan and Arnab, Anurag and Jansen, Aren and Schmid, Cordelia and Sun, Chen},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/nagrani2021neurips-attention/}
}