Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities
Abstract
Text-to-image synthesis has witnessed remarkable advances in recent years. Many attempts have been made to adapt text-to-image models to support multiple tasks. However, existing approaches typically require resource-intensive re-training or additional parameters to accommodate new tasks, which makes them inefficient for on-device deployment. We propose Multi-Task Upcycling (MTU), a simple yet effective recipe that extends the capabilities of a pre-trained text-to-image diffusion model to support a variety of image-to-image generation tasks. MTU replaces the Feed-Forward Network (FFN) layers in the diffusion model with smaller FFNs, referred to as experts, and combines them with a dynamic routing mechanism. To the best of our knowledge, MTU is the first multi-task diffusion modeling approach that seamlessly blends multi-tasking with on-device compatibility by mitigating parameter inflation. We show that MTU performs on par with single-task fine-tuned diffusion models across several tasks, including image editing, super-resolution, and inpainting, while maintaining latency and computational load (GFLOPs) similar to the single-task fine-tuned models.
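To make the core idea concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: a transformer FFN block replaced by several smaller expert FFNs whose outputs are combined by a learned, input-dependent router. All names, dimensions, and the choice of dense soft routing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UpcycledFFN(nn.Module):
    """Sketch: one large FFN replaced by several smaller expert FFNs
    combined via a dynamic router (names and sizes are assumptions)."""

    def __init__(self, dim: int, expert_hidden: int, num_experts: int = 4):
        super().__init__()
        # Each expert is a narrow two-layer FFN, so that the total
        # parameter count stays comparable to the original FFN.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(dim, expert_hidden),
                nn.GELU(),
                nn.Linear(expert_hidden, dim),
            )
            for _ in range(num_experts)
        ])
        # Router maps each token representation to a distribution
        # over experts (dense soft routing, for simplicity).
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        weights = torch.softmax(self.router(x), dim=-1)                 # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        # Per-token weighted combination of expert outputs.
        return torch.einsum("btde,bte->btd", expert_out, weights)

# Usage: drop-in replacement for an FFN block of width `dim`.
ffn = UpcycledFFN(dim=768, expert_hidden=512, num_experts=4)
x = torch.randn(2, 77, 768)
y = ffn(x)  # (2, 77, 768)
```

Because the experts are narrower than the FFN they replace, this kind of substitution can keep parameters and GFLOPs near the single-task baseline, which is the on-device property the abstract emphasizes.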
Cite
Chavhan et al. "Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities." Proceedings of the 42nd International Conference on Machine Learning, 2025.
BibTeX
@inproceedings{chavhan2025icml-upcycling,
  title     = {{Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities}},
  author    = {Chavhan, Ruchika and Mehrotra, Abhinav and Chadwick, Malcolm and Couto Pimentel Ramos, Alberto Gil and Morreale, Luca and Noroozi, Mehdi and Bhattacharya, Sourav},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {7578--7594},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/chavhan2025icml-upcycling/}
}