MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path
Abstract
Image generation with diffusion models can be controlled in multiple ways. In this paper, we systematically analyze the equations of modern generative diffusion networks to propose a framework, called MDP, that characterizes the design space of suitable manipulations. We identify five types of manipulations: of the intermediate latent, the conditional embedding, the cross-attention maps, the guidance, and the predicted noise. We analyze the parameters of each manipulation and the manipulation schedule. We show that several previous editing methods fit naturally into our framework. In particular, we identify one specific configuration, manipulation of the predicted noise, as a new type of control that yields higher-quality edits than previous work across a variety of local and global edits.
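The "predicted noise" manipulation highlighted in the abstract can be pictured as swapping (or blending) the noise estimates produced for a source and a target prompt partway through sampling. Below is a minimal, hypothetical sketch, not the authors' code: `unet`, `alphas_cumprod`, `timesteps`, and `switch_step` are assumed stand-ins, and a deterministic DDIM update is used for concreteness.

```python
import torch

@torch.no_grad()
def edit_by_noise_manipulation(unet, x_T, src_emb, tgt_emb,
                               alphas_cumprod, timesteps, switch_step):
    """Deterministic DDIM sampling in which the predicted noise comes from
    the source prompt early on (preserving structure) and from the target
    prompt afterwards (applying the edit)."""
    x = x_T  # starting latent, e.g. obtained by inverting the source image
    for i, t in enumerate(timesteps):  # timesteps in decreasing order
        # Hypothetical UNet call: predicts noise for (latent, timestep, text embedding).
        eps_src = unet(x, t, src_emb)
        eps_tgt = unet(x, t, tgt_emb)
        # Manipulation schedule: a hard switch at `switch_step`; a soft blend
        # (e.g. torch.lerp) would be another point in the same design space.
        eps = eps_src if i < switch_step else eps_tgt
        # Standard DDIM update with eta = 0 (alphas_cumprod is a 1-D tensor).
        a_t = alphas_cumprod[t]
        a_prev = (alphas_cumprod[timesteps[i + 1]]
                  if i + 1 < len(timesteps) else x.new_tensor(1.0))
        x0_pred = (x - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
    return x
```

In the framework's terms, `switch_step` plays the role of the manipulation schedule; the other four manipulations named in the abstract (intermediate latent, conditional embedding, cross-attention maps, guidance) would intervene at different points in this same loop.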
Cite
Text
Wang et al. "MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path." Transactions on Machine Learning Research, 2024.

Markdown
[Wang et al. "MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/wang2024tmlr-mdp/)

BibTeX
@article{wang2024tmlr-mdp,
  title = {{MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path}},
  author = {Wang, Qian and Zhang, Biao and Birsak, Michael and Wonka, Peter},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/wang2024tmlr-mdp/}
}