Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos

Abstract

We propose a self-supervised method for learning representations based on spatial audio-visual correspondences in egocentric videos. Our method uses a masked auto-encoding framework to synthesize masked binaural audio through the synergy of audio and vision, thereby learning useful spatial relationships between the two modalities. We use our pretrained features to tackle two downstream video tasks requiring spatial understanding in social scenarios: active speaker detection and spatial audio denoising. Through extensive experiments, we show that our features are generic enough to improve over multiple state-of-the-art baselines on both tasks on two challenging egocentric video datasets that offer binaural audio, EgoCom and EasyCom. Project: http://vision.cs.utexas.edu/projects/ego_av_corr.
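To make the masked auto-encoding idea in the abstract concrete, below is a minimal, hypothetical sketch: patches of a two-channel (binaural) audio spectrogram are masked out, and a transformer reconstructs them conditioned on visual features. All module names, shapes, masking ratios, and hyperparameters here are illustrative assumptions for exposition, not the authors' actual architecture.

```python
# Minimal sketch of masked binaural audio prediction conditioned on vision.
# Assumptions (not from the paper): patch size 16x16, 2 audio channels,
# precomputed 512-d visual tokens, 75% masking ratio.
import torch
import torch.nn as nn

class MaskedBinauralAVEncoder(nn.Module):
    def __init__(self, dim=256, depth=4, heads=4,
                 num_audio_patches=64, num_video_tokens=16):
        super().__init__()
        self.audio_proj = nn.Linear(2 * 16 * 16, dim)   # flattened binaural spectrogram patches
        self.video_proj = nn.Linear(512, dim)           # precomputed visual features per frame token
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_emb = nn.Parameter(
            torch.randn(1, num_audio_patches + num_video_tokens, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decode_head = nn.Linear(dim, 2 * 16 * 16)  # reconstruct binaural patches

    def forward(self, audio_patches, video_feats, mask):
        # audio_patches: (B, Na, 2*16*16); video_feats: (B, Nv, 512)
        # mask: (B, Na) bool, True where the audio patch is hidden
        a = self.audio_proj(audio_patches)
        a = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(a), a)
        v = self.video_proj(video_feats)
        x = torch.cat([a, v], dim=1) + self.pos_emb
        x = self.encoder(x)                             # audio and video tokens attend jointly
        return self.decode_head(x[:, : a.shape[1]])     # predictions for every audio patch

# Self-supervised objective: reconstruction loss only on the masked patches.
model = MaskedBinauralAVEncoder()
audio = torch.randn(8, 64, 2 * 16 * 16)
video = torch.randn(8, 16, 512)
mask = torch.rand(8, 64) < 0.75
pred = model(audio, video, mask)
loss = (pred - audio)[mask].pow(2).mean()
```

Because reconstructing masked binaural channels requires knowing where sounds come from relative to the camera wearer, the encoder is pushed to learn spatial audio-visual relationships; its features can then be reused for downstream tasks such as active speaker detection and spatial audio denoising.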

Cite

Text

Majumder et al. "Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02555

Markdown

[Majumder et al. "Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/majumder2024cvpr-learning/) doi:10.1109/CVPR52733.2024.02555

BibTeX

@inproceedings{majumder2024cvpr-learning,
  title     = {{Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos}},
  author    = {Majumder, Sagnik and Al-Halah, Ziad and Grauman, Kristen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {27058--27068},
  doi       = {10.1109/CVPR52733.2024.02555},
  url       = {https://mlanthology.org/cvpr/2024/majumder2024cvpr-learning/}
}