Multi-Source Diffusion Models for Simultaneous Music Generation and Separation
Abstract
In this work, we define a diffusion-based generative model capable of both music generation and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and experiment on the partial generation task of source imputation, where we generate a subset of the sources given the others (e.g., play a piano track that goes well with the drums). Additionally, we introduce a novel inference method for the separation task based on Dirac likelihood functions. We train our model on Slakh2100, a standard dataset for musical source separation, provide qualitative results in the generation settings, and showcase competitive quantitative results in the source separation setting. Our method is the first example of a single model that can handle both generation and separation tasks, thus representing a step toward general audio models.
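To make the partial-generation idea concrete, the sketch below shows one way source imputation could be run with a pretrained joint score model: inpainting-style conditioning inside a variance-exploding diffusion sampler, where the observed tracks are re-noised and clamped at every step while the missing tracks are denoised. This is an illustrative assumption, not the paper's implementation; `score_model`, `sigma`, the deterministic Euler reverse step, and the tensor shapes are all hypothetical, and the paper's Dirac-likelihood separation procedure is not modeled here.

```python
# Hypothetical sketch of source imputation with a joint score model.
# Assumptions (not from the paper): `score_model(x_t, t)` estimates the
# score of the joint density over N stacked sources, and sigma(t) is the
# noise level of a variance-exploding diffusion at time t.
import torch

def impute_sources(score_model, known, known_mask, sigma, n_steps=500):
    """Generate the missing sources (mask == 0) given the observed ones.

    known:      (N, T) tensor of sources; entries outside the mask are ignored.
    known_mask: (N, T) binary tensor, 1 where a source is observed.
    sigma:      callable t -> noise level at diffusion time t.
    """
    ts = torch.linspace(1.0, 1e-3, n_steps)
    x = sigma(ts[0]) * torch.randn_like(known)  # start from pure noise
    for i in range(n_steps - 1):
        t, t_next = ts[i], ts[i + 1]
        # Inpainting-style conditioning: overwrite the observed sources
        # with a freshly noised copy of the data at the current noise level.
        noised_known = known + sigma(t) * torch.randn_like(known)
        x = known_mask * noised_known + (1 - known_mask) * x
        # One reverse-diffusion step using the joint score
        # (deterministic Euler update; the stochastic term is omitted for brevity).
        score = score_model(x, t)
        x = x + (sigma(t) ** 2 - sigma(t_next) ** 2) * score
    return known_mask * known + (1 - known_mask) * x
```

Because the model learns a single joint score over all sources sharing a context, the same sampler covers total generation (empty mask), imputation (partial mask), and, with a likelihood term on the mixture, separation.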
Cite
Text
Mariani et al. "Multi-Source Diffusion Models for Simultaneous Music Generation and Separation." International Conference on Learning Representations, 2024.

Markdown
[Mariani et al. "Multi-Source Diffusion Models for Simultaneous Music Generation and Separation." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/mariani2024iclr-multisource/)

BibTeX
@inproceedings{mariani2024iclr-multisource,
  title = {{Multi-Source Diffusion Models for Simultaneous Music Generation and Separation}},
  author = {Mariani, Giorgio and Tallini, Irene and Postolache, Emilian and Mancusi, Michele and Cosmo, Luca and Rodolà, Emanuele},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/mariani2024iclr-multisource/}
}