Diffusion Domain Expansion: Learning to Coordinate Pre-Trained Diffusion Models
Abstract
In this paper, we propose Diffusion Domain Expansion (DDE), a method that efficiently extends pre-trained diffusion models to generate larger objects and handle more complex conditioning than their original capabilities allow. Our method employs a compact trainable network that coordinates the denoised outputs of pre-trained diffusion models. We demonstrate that the coordinator can be universally simple while generalizing to domains larger than those observed during training. We evaluate DDE on long audio track generation and conditional image generation, demonstrating its applicability across domains. DDE outperforms other approaches to coordinated generation with diffusion models in both qualitative and quantitative evaluations.
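The sketch below illustrates the general idea described in the abstract, assuming a chunk-based coordination scheme: a frozen pre-trained denoiser is applied to overlapping chunks of a sample longer than its native length, and a small trainable coordinator merges the per-chunk denoised outputs. The module names, shapes, overlap scheme, and the (x_t, t) denoiser interface are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of coordinating a frozen pre-trained diffusion denoiser
# over a larger domain; all specifics here are assumptions for illustration.
import torch
import torch.nn as nn


class Coordinator(nn.Module):
    """Small trainable network that fuses two overlapping denoised estimates."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv1d(2 * channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: denoised estimates for the shared overlap region, (B, C, T_overlap)
        return self.fuse(torch.cat([left, right], dim=1))


@torch.no_grad()
def denoise_chunk(pretrained: nn.Module, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # The pre-trained diffusion model stays frozen; only the coordinator is trained.
    # Assumes the denoiser maps (noisy chunk, timestep) -> denoised chunk of the same shape.
    return pretrained(x_t, t)


def coordinated_denoise(pretrained, coordinator, x_t, t, chunk_len, overlap):
    """Denoise a sample of length 2 * chunk_len - overlap with a fixed-length denoiser."""
    left = denoise_chunk(pretrained, x_t[..., :chunk_len], t)
    right = denoise_chunk(pretrained, x_t[..., chunk_len - overlap:], t)
    # The coordinator reconciles the overlapping region; the non-overlapping
    # tails of each chunk are kept as-is.
    fused = coordinator(left[..., -overlap:], right[..., :overlap])
    return torch.cat([left[..., :-overlap], fused, right[..., overlap:]], dim=-1)
```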
Cite
Text
Lifar et al. "Diffusion Domain Expansion: Learning to Coordinate Pre-Trained Diffusion Models." ICML 2024 Workshops: SPIGM, 2024.
Markdown
[Lifar et al. "Diffusion Domain Expansion: Learning to Coordinate Pre-Trained Diffusion Models." ICML 2024 Workshops: SPIGM, 2024.](https://mlanthology.org/icmlw/2024/lifar2024icmlw-diffusion/)
BibTeX
@inproceedings{lifar2024icmlw-diffusion,
  title = {{Diffusion Domain Expansion: Learning to Coordinate Pre-Trained Diffusion Models}},
  author = {Lifar, Egor and Savkin, Semyon and Garipov, Timur and Tong, Shangyuan and Jaakkola, Tommi},
  booktitle = {ICML 2024 Workshops: SPIGM},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/lifar2024icmlw-diffusion/}
}