Improving Musical Accompaniment Co-Creation via Diffusion Transformers
Abstract
Building upon Diff-A-Riff, a latent diffusion model for musical instrument accompaniment generation, we present a series of improvements targeting quality, diversity, inference speed, and text-driven control. First, we upgrade the underlying autoencoder to a stereo-capable model with superior fidelity and replace the latent U-Net with a Diffusion Transformer. Additionally, we refine text prompting by training a cross-modality predictive network to translate text-derived CLAP embeddings to audio-derived CLAP embeddings. Finally, we improve inference speed by training the latent model under a consistency framework, achieving competitive quality with fewer denoising steps. Our model is evaluated against the original Diff-A-Riff variant using objective metrics in ablation experiments, demonstrating promising advancements in all targeted areas. Sound examples are available at https://sonycslparis.github.io/improved_dar/.
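The abstract mentions a cross-modality predictive network that translates text-derived CLAP embeddings into audio-derived CLAP embeddings. Below is a minimal sketch of such a translation network, not the authors' implementation: the MLP architecture, the 512-dimensional embedding size, and the MSE objective are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a
# cross-modality predictive network mapping text-derived CLAP embeddings
# to audio-derived CLAP embeddings.
import torch
import torch.nn as nn

CLAP_DIM = 512  # assumed CLAP embedding dimensionality


class TextToAudioCLAP(nn.Module):
    """Predicts an audio-space CLAP embedding from a text-space embedding."""

    def __init__(self, dim: int = CLAP_DIM, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)


# One training step on paired (text, audio) CLAP embeddings of the same clip.
model = TextToAudioCLAP()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

text_emb = torch.randn(8, CLAP_DIM)   # placeholder text-derived embeddings
audio_emb = torch.randn(8, CLAP_DIM)  # placeholder audio-derived embeddings

optimizer.zero_grad()
pred = model(text_emb)
loss = nn.functional.mse_loss(pred, audio_emb)  # assumed regression objective
loss.backward()
optimizer.step()
```

At inference time, a text prompt would be encoded with the CLAP text encoder and passed through this network, so the diffusion model receives a conditioning vector in the same audio-embedding space it was trained on.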
Cite
Text
Nistal et al. "Improving Musical Accompaniment Co-Creation via Diffusion Transformers." NeurIPS 2024 Workshops: Audio_Imagination, 2024.
Markdown
[Nistal et al. "Improving Musical Accompaniment Co-Creation via Diffusion Transformers." NeurIPS 2024 Workshops: Audio_Imagination, 2024.](https://mlanthology.org/neuripsw/2024/nistal2024neuripsw-improving/)
BibTeX
@inproceedings{nistal2024neuripsw-improving,
  title = {{Improving Musical Accompaniment Co-Creation via Diffusion Transformers}},
  author = {Nistal, Javier and Pasini, Marco and Lattner, Stefan},
  booktitle = {NeurIPS 2024 Workshops: Audio_Imagination},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/nistal2024neuripsw-improving/}
}