Diffusion-Based Conditional Image Editing Through Optimized Inference with Guidance

Abstract

We present a simple yet effective training-free approach to text-driven image-to-image translation based on a pretrained text-to-image diffusion model. Our goal is to generate an image that aligns with the target task while preserving the structure and background of the source image. To this end, we derive a representation guidance term from a combination of two objectives: maximizing the similarity to the target prompt, measured by the CLIP score, and minimizing the structural distance to the source latent variable. This guidance improves the fidelity of the generated image to the given target prompt while maintaining the structural integrity of the source image. To incorporate the representation guidance, we optimize the target latent variable of the diffusion model's reverse process with the guidance. Experimental results demonstrate that our method achieves outstanding image-to-image translation performance on various tasks when combined with the pretrained Stable Diffusion model.
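To make the latent-optimization idea in the abstract concrete, the sketch below shows one guided update of an intermediate latent during the reverse process: the latent is nudged to raise a CLIP similarity to the target prompt while an L2 term keeps it close to the source latent. This is a minimal illustration, not the authors' implementation; the helper `clip_score_fn`, the choice of MSE as the structural distance, the Adam optimizer, and all hyperparameters are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def guided_latent_step(z_t, z_src, clip_score_fn,
                       lambda_struct=0.5, lr=0.1, n_steps=5):
    """One guidance step on an intermediate latent z_t.

    clip_score_fn(z) is a hypothetical helper that decodes the latent and
    returns a differentiable CLIP similarity to the target prompt.
    z_src is the source latent whose structure we want to preserve.
    """
    # Optimize a detached copy so the diffusion sampler's graph is untouched.
    z = z_t.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # Maximize CLIP similarity (minimize its negative) while penalizing
        # deviation from the source latent to retain structure/background.
        loss = -clip_score_fn(z) + lambda_struct * F.mse_loss(z, z_src)
        loss.backward()
        opt.step()
    return z.detach()
```

In a full pipeline, an update like this would presumably be interleaved with the denoising steps of a pretrained Stable Diffusion model, so each intermediate latent is adjusted toward the target prompt before the next denoising step.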

Cite

Text

Lee et al. "Diffusion-Based Conditional Image Editing Through Optimized Inference with Guidance." Winter Conference on Applications of Computer Vision, 2025.

Markdown

[Lee et al. "Diffusion-Based Conditional Image Editing Through Optimized Inference with Guidance." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/lee2025wacv-diffusionbased/)

BibTeX

@inproceedings{lee2025wacv-diffusionbased,
  title     = {{Diffusion-Based Conditional Image Editing Through Optimized Inference with Guidance}},
  author    = {Lee, Hyunsoo and Kang, Minsoo and Han, Bohyung},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2025},
  pages     = {4472--4480},
  url       = {https://mlanthology.org/wacv/2025/lee2025wacv-diffusionbased/}
}