Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion
Abstract
Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain. However, this wave has yet to reach the audio domain. In this paper, we explore two zero-shot editing techniques for audio signals, which use DDPM inversion with pre-trained diffusion models. The first, which we coin *ZEro-shot Text-based Audio (ZETA)* editing, is adopted from the image domain. The second, named *ZEro-shot UnSupervised (ZEUS)* editing, is a novel approach for discovering semantically meaningful editing directions without supervision. When applied to music signals, this method exposes a range of musically interesting modifications, from controlling the participation of specific instruments to improvisations on the melody. Samples, code, and the full version of this paper can be found on the [project web page](https://hilamanor.github.io/AudioEditing/).
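For readers unfamiliar with the core mechanism, the following is a minimal, self-contained sketch of DDPM inversion as used for ZETA-style editing: recover per-step noise maps that make the reverse diffusion chain reproduce the input exactly, then replay those maps under a different text prompt. The toy `eps_model`, the noise schedule, the step count, and the tensor shapes are all illustrative assumptions, not the authors' released implementation, which builds on large pre-trained audio diffusion models.

```python
# Sketch of DDPM inversion for zero-shot editing. Assumptions are marked.
import torch

T = 50                                 # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)  # linear noise schedule (assumed)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)    # cumulative products, \bar{alpha}_t

def eps_model(x_t, t, cond):
    """Deterministic stand-in for a pre-trained conditional noise predictor."""
    torch.manual_seed(hash((t, cond)) % (2 ** 31))
    return 0.1 * torch.randn_like(x_t)

def mu_sigma(x_t, t, cond):
    """Mean and std of one reverse DDPM step (sigma_t^2 = beta_t variant)."""
    b, a, ab = betas[t - 1], alphas[t - 1], abar[t - 1]
    eps = eps_model(x_t, t, cond)
    mu = (x_t - b / (1 - ab).sqrt() * eps) / a.sqrt()
    return mu, b.sqrt()

def invert(x0, cond):
    """Sample each x_t independently from q(x_t | x_0), then solve for the
    noise maps z_t that steer the reverse chain through those states."""
    xs = [x0]
    for t in range(1, T + 1):
        ab = abar[t - 1]
        xs.append(ab.sqrt() * x0 + (1 - ab).sqrt() * torch.randn_like(x0))
    zs = {}
    for t in range(T, 0, -1):
        mu, sigma = mu_sigma(xs[t], t, cond)
        zs[t] = (xs[t - 1] - mu) / sigma  # makes the step land on xs[t-1]
    return xs[T], zs

def generate(x_T, zs, cond):
    """Reverse diffusion reusing the inverted noise maps: the source prompt
    reconstructs the input; a new prompt produces a text-based edit."""
    x = x_T
    for t in range(T, 0, -1):
        mu, sigma = mu_sigma(x, t, cond)
        x = mu + sigma * zs[t]
    return x

x0 = torch.randn(1, 16)  # stand-in for a (latent) spectrogram of the signal
xT, zs = invert(x0, "an acoustic guitar melody")
rec = generate(xT, zs, "an acoustic guitar melody")  # exact reconstruction
out = generate(xT, zs, "a piano melody")             # zero-shot text edit
print(torch.allclose(rec, x0, atol=1e-4))            # True (up to float error)
```

Reusing the inverted noise maps is what keeps the edited output close to the structure of the source signal, so the new prompt steers semantics rather than generating freely; ZEUS, sketched only in the abstract above, instead perturbs the denoising process along directions found without supervision.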
Cite
Text
Manor and Michaeli. "Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion." ICML 2024 Workshops: SPIGM, 2024.

Markdown

[Manor and Michaeli. "Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion." ICML 2024 Workshops: SPIGM, 2024.](https://mlanthology.org/icmlw/2024/manor2024icmlw-zeroshot/)

BibTeX
@inproceedings{manor2024icmlw-zeroshot,
  title     = {{Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion}},
  author    = {Manor, Hila and Michaeli, Tomer},
  booktitle = {ICML 2024 Workshops: SPIGM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/manor2024icmlw-zeroshot/}
}