Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception
Abstract
Multi-modal semantics, which aims to ground semantic representations in perception, has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in raw auditory data, using standard evaluations for multi-modal semantics. After demonstrating the quality of such auditorily grounded representations, we show how they can be applied to tasks where auditory perception is relevant, including two unsupervised categorization experiments, and provide further analysis. We find that features transferred from deep neural networks outperform bag-of-audio-words approaches. To our knowledge, this is the first work to construct multi-modal models from a combination of textual information and auditory information extracted from deep neural networks, and the first work to evaluate the performance of tri-modal (textual, visual and auditory) semantic models.
Cite
Text
Kiela and Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception." Journal of Artificial Intelligence Research, 2017. doi:10.1613/JAIR.5665

Markdown
[Kiela and Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception." Journal of Artificial Intelligence Research, 2017.](https://mlanthology.org/jair/2017/kiela2017jair-learning/) doi:10.1613/JAIR.5665

BibTeX
@article{kiela2017jair-learning,
title = {{Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception}},
author = {Kiela, Douwe and Clark, Stephen},
journal = {Journal of Artificial Intelligence Research},
year = {2017},
pages = {1003-1030},
doi = {10.1613/JAIR.5665},
volume = {60},
url = {https://mlanthology.org/jair/2017/kiela2017jair-learning/}
}