DreamWalk: Style Space Exploration Using Diffusion Guidance

Abstract

Text-conditioned diffusion models can generate impressive images, but fall short when it comes to fine-grained control. Unlike direct-editing tools like Photoshop, text-conditioned models require the artist to perform “prompt engineering,” constructing special text sentences to control the style or the amount of a particular subject present in the output image. Our goal is to provide fine-grained control over the style and substance specified by the prompt, for example to adjust the intensity of styles in different regions of the image (Fig. 1). Our approach is to decompose the text prompt into conceptual elements and apply a separate guidance term for each element in a single diffusion process. We introduce guidance scale functions to control when in the diffusion process, and where in the image, to intervene. Since the method is based solely on adjusting diffusion guidance, it does not require fine-tuning or manipulating the internal layers of the diffusion model’s neural network, and can be used in conjunction with LoRA- or DreamBooth-trained models.
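The core idea — one guidance term per prompt concept, each weighted by its own guidance scale function — can be sketched as an extension of classifier-free guidance. The following is a minimal illustration, not the authors' implementation: `eps_uncond` and the per-concept `eps_conds` stand in for a diffusion model's noise predictions, and the per-pixel scale maps are hypothetical examples of the paper's guidance scale functions (a scalar would apply a concept uniformly; a spatial map confines it to a region).

```python
import numpy as np

def multi_guidance(eps_uncond, eps_conds, scales):
    """Combine an unconditional noise prediction with several
    per-concept conditional predictions, one guidance term each.

    eps_uncond: (H, W) array, unconditional noise prediction
    eps_conds:  list of (H, W) arrays, one prediction per prompt concept
    scales:     list of scalars or (H, W) maps -- stand-ins for the
                guidance scale functions, controlling where and how
                strongly each concept steers the sample
    """
    eps = eps_uncond.copy()
    for eps_c, s in zip(eps_conds, scales):
        # Each concept contributes its own guidance term,
        # weighted pointwise by its scale function.
        eps = eps + s * (eps_c - eps_uncond)
    return eps
```

With a single concept and a constant scalar scale, this reduces to standard classifier-free guidance; a spatial scale map that is zero in some region leaves that region following the unconditional prediction, which is the intuition behind per-region style intensity control.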

Cite

Text

Shu et al. "DreamWalk: Style Space Exploration Using Diffusion Guidance." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92808-6_7

Markdown

[Shu et al. "DreamWalk: Style Space Exploration Using Diffusion Guidance." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/shu2024eccvw-dreamwalk/) doi:10.1007/978-3-031-92808-6_7

BibTeX

@inproceedings{shu2024eccvw-dreamwalk,
  title     = {{DreamWalk: Style Space Exploration Using Diffusion Guidance}},
  author    = {Shu, Michelle and Herrmann, Charles and Bowen, Richard Strong and Cole, Forrester and Zabih, Ramin},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {104--120},
  doi       = {10.1007/978-3-031-92808-6_7},
  url       = {https://mlanthology.org/eccvw/2024/shu2024eccvw-dreamwalk/}
}