MT3: Multi-Task Multitrack Music Transcription
Abstract
Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding. Unlike Automatic Speech Recognition (ASR), which typically focuses on the words of a single speaker, AMT often requires transcribing multiple instruments simultaneously, all while preserving fine-scale pitch and timing information. Further, many AMT datasets are "low-resource", as even expert musicians find music transcription difficult and time-consuming. Thus, prior work has focused on task-specific architectures, tailored to the individual instruments of each task. In this work, motivated by the promising results of sequence-to-sequence transfer learning for low-resource Natural Language Processing (NLP), we demonstrate that a general-purpose Transformer model can perform multi-task AMT, jointly transcribing arbitrary combinations of musical instruments across several transcription datasets. We show this unified training framework achieves high-quality transcription results across a range of datasets, dramatically improving performance for low-resource instruments (such as guitar), while preserving strong performance for abundant instruments (such as piano). Finally, by expanding the scope of AMT, we expose the need for more consistent evaluation metrics and better dataset alignment, and provide a strong baseline for this new direction of multi-task AMT.
Cite
Text
Gardner et al. "MT3: Multi-Task Multitrack Music Transcription." International Conference on Learning Representations, 2022.
Markdown
[Gardner et al. "MT3: Multi-Task Multitrack Music Transcription." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/gardner2022iclr-mt3/)
BibTeX
@inproceedings{gardner2022iclr-mt3,
  title = {{MT3: Multi-Task Multitrack Music Transcription}},
  author = {Gardner, Joshua P and Simon, Ian and Manilow, Ethan and Hawthorne, Curtis and Engel, Jesse},
  booktitle = {International Conference on Learning Representations},
  year = {2022},
  url = {https://mlanthology.org/iclr/2022/gardner2022iclr-mt3/}
}