Paint by Example: Exemplar-Based Image Editing with Diffusion Models

Abstract

Language-guided image editing has achieved great success recently. In this paper, we investigate exemplar-guided image editing, which allows more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, this naive approach causes obvious fusing artifacts. We carefully analyze the issue and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary-shape mask for the exemplar image and leverage classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward pass of the diffusion model without any iterative optimization. We demonstrate that our method achieves impressive performance and enables controllable editing of in-the-wild images with high fidelity.
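The abstract's use of classifier-free guidance can be illustrated with a minimal sketch (PyTorch-style Python, not the authors' released code): the denoiser is queried once with the exemplar embedding and once with a learned unconditional embedding, and the two noise estimates are extrapolated. The names `eps_model`, `exemplar_emb`, `null_emb`, and `guidance_scale` are hypothetical placeholders.

```python
# Hypothetical sketch of classifier-free guidance for an exemplar-conditioned
# denoiser; all names are illustrative, not taken from the paper's code.
import torch

def guided_noise_prediction(
    eps_model,                    # denoiser: (x_t, t, cond) -> predicted noise
    x_t: torch.Tensor,            # noisy latent at timestep t
    t: torch.Tensor,              # diffusion timestep
    exemplar_emb: torch.Tensor,   # embedding of the reference (exemplar) image
    null_emb: torch.Tensor,       # learned "unconditional" embedding
    guidance_scale: float = 5.0,  # >1 strengthens similarity to the exemplar
) -> torch.Tensor:
    eps_cond = eps_model(x_t, t, exemplar_emb)  # exemplar-conditioned estimate
    eps_uncond = eps_model(x_t, t, null_emb)    # unconditional estimate
    # Extrapolate away from the unconditional prediction toward the
    # exemplar-conditioned one, as in standard classifier-free guidance.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Larger guidance scales push the sample closer to the exemplar's semantics at the cost of diversity, which is consistent with the abstract's stated use of guidance to increase exemplar similarity.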

Cite

Text

Yang et al. "Paint by Example: Exemplar-Based Image Editing with Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01763

Markdown

[Yang et al. "Paint by Example: Exemplar-Based Image Editing with Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/yang2023cvpr-paint/) doi:10.1109/CVPR52729.2023.01763

BibTeX

@inproceedings{yang2023cvpr-paint,
  title     = {{Paint by Example: Exemplar-Based Image Editing with Diffusion Models}},
  author    = {Yang, Binxin and Gu, Shuyang and Zhang, Bo and Zhang, Ting and Chen, Xuejin and Sun, Xiaoyan and Chen, Dong and Wen, Fang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {18381--18391},
  doi       = {10.1109/CVPR52729.2023.01763},
  url       = {https://mlanthology.org/cvpr/2023/yang2023cvpr-paint/}
}