Label-Efficient Audio Classification Through Multitask Learning and Self-Supervision
Abstract
While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combination of multitask learning and self-supervised learning on unlabeled data. We trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks. We describe several easily implemented self-supervised learning tasks that can operate on any large, unlabeled audio corpus. We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks. We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance.
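The core idea in the abstract — a shared audio feature extractor feeding several task-specific heads whose losses are summed during training — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear "encoder" stands in for the WaveNet-based extractor, and the head names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared encoder: a single linear projection standing in for
# the paper's WaveNet-based feature extractor (shapes are illustrative).
W_shared = rng.normal(size=(64, 16))

def encode(x):
    """Map raw audio frames (batch, 64) to shared features (batch, 16)."""
    return np.tanh(x @ W_shared)

# Task-specific linear heads: one supervised classifier plus two stand-ins
# for self-supervised tasks. Names and class counts are made up here.
heads = {
    "supervised": rng.normal(size=(16, 10)),
    "selfsup_a": rng.normal(size=(16, 4)),
    "selfsup_b": rng.normal(size=(16, 4)),
}

def softmax_xent(logits, labels):
    """Mean softmax cross-entropy over a batch of integer labels."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def multitask_loss(x, labels_per_task):
    """Sum of per-task losses over features computed once by the shared encoder."""
    h = encode(x)
    return sum(
        softmax_xent(h @ heads[name], labels_per_task[name])
        for name in heads
    )

# Toy batch: 8 frames of 64 samples, with labels for each task.
x = rng.normal(size=(8, 64))
labels = {
    "supervised": rng.integers(0, 10, size=8),
    "selfsup_a": rng.integers(0, 4, size=8),
    "selfsup_b": rng.integers(0, 4, size=8),
}
loss = multitask_loss(x, labels)
```

In the limited-label regime the paper targets, only the supervised head's batches are scarce; the self-supervised heads can be trained on any unlabeled audio, and gradients from all heads flow into the shared encoder.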
Cite
Text
Lee et al. "Label-Efficient Audio Classification Through Multitask Learning and Self-Supervision." ICLR 2019 Workshops: LLD, 2019.
Markdown
[Lee et al. "Label-Efficient Audio Classification Through Multitask Learning and Self-Supervision." ICLR 2019 Workshops: LLD, 2019.](https://mlanthology.org/iclrw/2019/lee2019iclrw-labelefficient/)
BibTeX
@inproceedings{lee2019iclrw-labelefficient,
title = {{Label-Efficient Audio Classification Through Multitask Learning and Self-Supervision}},
author = {Lee, Tyler and Gong, Ting and Padhy, Suchismita and Rouditchenko, Andrew and Ndirango, Anthony},
booktitle = {ICLR 2019 Workshops: LLD},
year = {2019},
url = {https://mlanthology.org/iclrw/2019/lee2019iclrw-labelefficient/}
}