Unsupervised Transcription of Piano Music
Abstract
We present a new probabilistic model for transcribing piano music from audio to a symbolic form. Our model reflects the process by which discrete musical events give rise to acoustic signals that are then superimposed to produce the observed data. As a result, the inference procedure for our model naturally resolves the source separation problem introduced by the piano's polyphony. In order to adapt to the properties of a new instrument or acoustic environment being transcribed, we learn recording-specific spectral profiles and temporal envelopes in an unsupervised fashion. Our system outperforms the best published approaches on a standard piano transcription task, achieving a 10.6% relative gain in note onset F1 on real piano audio.
Cite
Text
Berg-Kirkpatrick et al. "Unsupervised Transcription of Piano Music." Neural Information Processing Systems, 2014.
Markdown
[Berg-Kirkpatrick et al. "Unsupervised Transcription of Piano Music." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/bergkirkpatrick2014neurips-unsupervised/)
BibTeX
@inproceedings{bergkirkpatrick2014neurips-unsupervised,
title = {{Unsupervised Transcription of Piano Music}},
author = {Berg-Kirkpatrick, Taylor and Andreas, Jacob and Klein, Dan},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {1538--1546},
url = {https://mlanthology.org/neurips/2014/bergkirkpatrick2014neurips-unsupervised/}
}