DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization

Abstract

The objective of text-to-image (T2I) personalization is to customize a diffusion model to a user-provided reference concept, generating diverse images of the concept aligned with the target prompts. Conventional methods that represent the reference concept using unique text embeddings often fail to accurately mimic the appearance of the reference. To address this, one solution is to explicitly condition the reference images on the target denoising process, known as key-value replacement. However, prior works are constrained to local editing since they disrupt the structure path of the pre-trained T2I model. To overcome this, we propose a novel plug-in method, called DreamMatcher, which reformulates T2I personalization as semantic matching. Specifically, DreamMatcher replaces the target values with reference values aligned by semantic matching, while leaving the structure path unchanged to preserve the versatile capability of pre-trained T2I models for generating diverse structures. We also introduce a semantic-consistent masking strategy to isolate the personalized concept from irrelevant regions introduced by the target prompts. Compatible with existing T2I models, DreamMatcher shows significant improvements in complex scenarios. Intensive analyses demonstrate the effectiveness of our approach.
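The abstract describes the core mechanism: keep the target queries and keys (the structure path) intact, warp the reference values to the target layout via dense semantic matching, and substitute them as the attention values, optionally restricted to the concept region by a mask. The PyTorch sketch below is only an illustration of that idea under simplifying assumptions (cosine-similarity matching with a hard nearest-neighbour warp, explicit matching features, and a precomputed binary mask); it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def appearance_matching_self_attention(
    q_tgt,      # (B, N, C) target queries -- structure path, left unchanged
    k_tgt,      # (B, N, C) target keys    -- structure path, left unchanged
    v_tgt,      # (B, N, C) original target values
    v_ref,      # (B, N, C) reference values carrying the concept appearance
    f_tgt,      # (B, N, C) target features used for matching (assumed input)
    f_ref,      # (B, N, C) reference features used for matching (assumed input)
    mask=None,  # (B, N) optional binary mask isolating the personalized concept
):
    B, N, C = q_tgt.shape

    # 1) Dense semantic matching: cosine similarity between target and
    #    reference features, followed by a hard nearest-neighbour assignment.
    sim = torch.einsum(
        "bnc,bmc->bnm",
        F.normalize(f_tgt, dim=-1),
        F.normalize(f_ref, dim=-1),
    )                                                   # (B, N, N)
    nn_idx = sim.argmax(dim=-1)                         # (B, N)

    # 2) Warp reference values into the target layout along the correspondence.
    v_warped = torch.gather(v_ref, 1, nn_idx.unsqueeze(-1).expand(-1, -1, C))

    # 3) Semantic-consistent masking: replace values only inside the concept
    #    region; elsewhere keep the original target values.
    if mask is not None:
        m = mask.unsqueeze(-1).float()
        v_warped = m * v_warped + (1.0 - m) * v_tgt

    # 4) Standard self-attention with unchanged queries/keys (structure path)
    #    but substituted values (appearance path).
    attn = F.softmax(q_tgt @ k_tgt.transpose(-2, -1) / C ** 0.5, dim=-1)
    return attn @ v_warped
```

In this reading, the structure path (queries/keys) still comes entirely from the pre-trained T2I model, so layout diversity is preserved, while only the value path imports the reference appearance.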

Cite

Text

Nam et al. "DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00774

Markdown

[Nam et al. "DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/nam2024cvpr-dreammatcher/) doi:10.1109/CVPR52733.2024.00774

BibTeX

@inproceedings{nam2024cvpr-dreammatcher,
  title     = {{DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization}},
  author    = {Nam, Jisu and Kim, Heesu and Lee, DongJae and Jin, Siyoon and Kim, Seungryong and Chang, Seunggyu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {8100--8110},
  doi       = {10.1109/CVPR52733.2024.00774},
  url       = {https://mlanthology.org/cvpr/2024/nam2024cvpr-dreammatcher/}
}