A Universal Music Translation Network
Abstract
We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain WaveNet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. By employing a diverse training dataset and large network capacity, the single encoder allows us to translate even from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. We also study the properties of the obtained translation and demonstrate translation even from a whistle, potentially enabling the creation of instrumental music by untrained humans.
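To make the architecture described above concrete, the sketch below shows a shared encoder paired with one decoder per musical domain, operating directly on raw waveform tensors. It is a minimal illustration, not the authors' implementation: the layer sizes, strides, and simple convolutional decoders are assumptions, whereas the paper uses WaveNet decoders, input augmentation, and a domain confusion loss on the latent code.

```python
# Minimal sketch (not the authors' code): a shared encoder producing a
# domain-independent latent code, with a separate decoder per musical domain.
# Layer sizes and module choices are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Downsampling 1-D convolutional encoder shared across all domains."""
    def __init__(self, channels=64, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(channels, latent_dim, kernel_size=9, stride=4, padding=4),
        )

    def forward(self, wav):       # wav: (batch, 1, samples)
        return self.net(wav)      # latent: (batch, latent_dim, samples / 64)

class DomainDecoder(nn.Module):
    """Per-domain decoder; in the paper this role is played by a WaveNet decoder."""
    def __init__(self, channels=64, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, channels, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, latent):
        return self.net(latent)   # reconstructed waveform in the decoder's domain

encoder = SharedEncoder()
decoders = nn.ModuleList([DomainDecoder() for _ in range(6)])  # e.g. 6 instrument domains

wav = torch.randn(2, 1, 4096)     # a batch of raw waveform segments
latent = encoder(wav)             # shared, domain-independent code
translated = decoders[3](latent)  # render the same musical content in domain 3
```

Translation to an unseen source domain works the same way at inference time: any input waveform is passed through the single shared encoder, and the decoder of the desired target domain renders the output.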
Cite
Text
Mor et al. "A Universal Music Translation Network." International Conference on Learning Representations, 2019.

Markdown

[Mor et al. "A Universal Music Translation Network." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/mor2019iclr-universal/)

BibTeX
@inproceedings{mor2019iclr-universal,
  title = {{A Universal Music Translation Network}},
  author = {Mor, Noam and Wolf, Lior and Polyak, Adam and Taigman, Yaniv},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/mor2019iclr-universal/}
}