Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events
Abstract
Audio-visual representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. We show that the learnt representations are useful for classifying events and localizing their characteristic audio-visual elements. The system is trained using only video-level event labels without any timing information. An important feature of our method is its capacity to learn from unsynchronized audio-visual events. We achieve state-of-the-art results on a large-scale dataset of weakly-labeled audio event videos. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
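The core idea, multiple instance learning (MIL) over unsynchronized audio and visual streams, can be illustrated with a minimal sketch. This is not the authors' implementation: the module names, feature dimensions, max-pooling over per-instance scores, and fusion by averaging are all illustrative assumptions standing in for the paper's actual architecture.

```python
# Minimal MIL sketch (assumed, not the authors' code) for weakly supervised
# audio-visual event classification in PyTorch. Each modality scores its own
# instances (audio segments, visual regions); pooling to a video-level score
# happens independently per modality, so cues need not be synchronized.
import torch
import torch.nn as nn

class AudioVisualMIL(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, num_classes=10):
        super().__init__()
        # Per-instance classifiers: one class-score vector per instance.
        self.audio_head = nn.Linear(audio_dim, num_classes)
        self.visual_head = nn.Linear(visual_dim, num_classes)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (batch, n_audio_segments, audio_dim)
        # visual_feats: (batch, n_visual_regions, visual_dim)
        audio_scores = self.audio_head(audio_feats)      # (B, Ta, C)
        visual_scores = self.visual_head(visual_feats)   # (B, Tv, C)
        # Max-pool over instances: each modality selects its most salient
        # instance per class; the arg-max instances also give localization.
        audio_video = audio_scores.max(dim=1).values     # (B, C)
        visual_video = visual_scores.max(dim=1).values   # (B, C)
        # Fuse modality-level scores into one video-level prediction.
        return (audio_video + visual_video) / 2

model = AudioVisualMIL()
audio = torch.randn(4, 20, 128)   # e.g. 20 one-second audio segments
visual = torch.randn(4, 36, 512)  # e.g. 36 visual region proposals
logits = model(audio, visual)
# Trained with video-level event labels only (no timing information).
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(4, 10))
```

Because each modality is pooled before fusion, the most informative audio segment and visual region need not occur at the same time, which is what allows such a model to learn from unsynchronized audio-visual events.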
Cite
Text
Parekh et al. "Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.
Markdown
[Parekh et al. "Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/parekh2018cvprw-weakly/)
BibTeX
@inproceedings{parekh2018cvprw-weakly,
title = {{Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events}},
author = {Parekh, Sanjeel and Essid, Slim and Ozerov, Alexey and Duong, Ngoc Q. K. and Pérez, Patrick and Richard, Gaël},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2018},
pages = {2518-2519},
url = {https://mlanthology.org/cvprw/2018/parekh2018cvprw-weakly/}
}