Style Aligned Image Generation via Shared Attention

Abstract

Large-scale Text-to-Image (T2I) models have rapidly gained prominence across creative fields, generating visually compelling outputs from textual prompts. However, controlling these models to ensure consistent style remains challenging, with existing methods necessitating fine-tuning and manual intervention to disentangle content and style. In this paper, we introduce StyleAligned, a novel technique designed to establish style alignment among a series of generated images. By employing minimal "attention sharing" during the diffusion process, our method maintains style consistency across images within T2I models. This approach allows for the creation of style-consistent images using a reference style through a straightforward inversion operation. Our method's evaluation across diverse styles and text prompts demonstrates high-quality synthesis and fidelity, underscoring its efficacy in achieving consistent style across various inputs.
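As a rough illustration of the "attention sharing" idea the abstract describes, the sketch below shows one way a diffusion model's self-attention layer could let each generated image attend to a reference image's keys and values in addition to its own, nudging the whole batch toward a shared style. The function name shared_attention, the ref-tensor arguments, and the tensor shapes are illustrative assumptions for this sketch, not the authors' released implementation.

# Minimal sketch (PyTorch 2.x), assuming precomputed per-head attention
# projections; shapes and the reference-broadcast scheme are assumptions.
import torch
import torch.nn.functional as F

def shared_attention(q, k, v, k_ref, v_ref):
    """Attend over each target image's own tokens plus a reference image's tokens.

    q, k, v:      (batch, heads, tokens, dim) projections for the generated images.
    k_ref, v_ref: (1, heads, tokens, dim) projections from the reference image.
    """
    b = q.shape[0]
    # Concatenate the reference keys/values onto every image's own keys/values,
    # so each image's queries can also attend to the reference tokens.
    k_all = torch.cat([k, k_ref.expand(b, -1, -1, -1)], dim=2)
    v_all = torch.cat([v, v_ref.expand(b, -1, -1, -1)], dim=2)
    return F.scaled_dot_product_attention(q, k_all, v_all)

if __name__ == "__main__":
    b, h, t, d = 2, 8, 64, 40
    q, k, v = (torch.randn(b, h, t, d) for _ in range(3))
    k_ref, v_ref = (torch.randn(1, h, t, d) for _ in range(2))
    out = shared_attention(q, k, v, k_ref, v_ref)
    print(out.shape)  # torch.Size([2, 8, 64, 40])

In a real pipeline such a layer would replace the self-attention of a pretrained T2I model at inference time only, which is consistent with the abstract's claim that no fine-tuning is required.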

Cite

Text

Hertz et al. "Style Aligned Image Generation via Shared Attention." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00457

Markdown

[Hertz et al. "Style Aligned Image Generation via Shared Attention." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/hertz2024cvpr-style/) doi:10.1109/CVPR52733.2024.00457

BibTeX

@inproceedings{hertz2024cvpr-style,
  title     = {{Style Aligned Image Generation via Shared Attention}},
  author    = {Hertz, Amir and Voynov, Andrey and Fruchter, Shlomi and Cohen-Or, Daniel},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {4775--4785},
  doi       = {10.1109/CVPR52733.2024.00457},
  url       = {https://mlanthology.org/cvpr/2024/hertz2024cvpr-style/}
}