DiffSketcher: Text Guided Vector Sketch Synthesis Through Latent Diffusion Models

Abstract

Even though trained mainly on images, we discover that pretrained diffusion models show impressive power in guiding sketch synthesis. In this paper, we present DiffSketcher, an innovative algorithm that creates *vectorized* free-hand sketches using natural language input. DiffSketcher is developed based on a pretrained text-to-image diffusion model. It performs the task by directly optimizing a set of Bézier curves with an extended version of the score distillation sampling (SDS) loss, which allows us to use a raster-level diffusion model as a prior for optimizing a parametric vectorized sketch generator. Furthermore, we explore attention maps embedded in the diffusion model for effective stroke initialization to speed up the generation process. The generated sketches demonstrate multiple levels of abstraction while maintaining recognizability, underlying structure, and essential visual details of the subject drawn. Our experiments show that DiffSketcher achieves higher quality than prior work. The code and demo of DiffSketcher can be found at https://ximinng.github.io/DiffSketcher-project/.
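To make the SDS-style optimization concrete, the following is a minimal toy sketch of the idea of pushing an (`eps_hat - eps`) gradient through a parametric Bézier curve. It is not the DiffSketcher implementation: the pretrained text-to-image diffusion model is replaced by a hypothetical stand-in "denoiser" that believes the clean image is a fixed target, and the differentiable rasterizer is replaced by sampling points along the curve.

```python
import numpy as np

def cubic_bezier(ctrl, ts):
    """Evaluate a cubic Bézier curve with control points ctrl at parameters ts."""
    p0, p1, p2, p3 = ctrl  # each is a (2,) point
    t = ts[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def sds_like_step(ctrl, target_pts, lr=0.05, rng=None):
    """One SDS-flavoured step: noise the 'render' (sampled curve points),
    ask the stand-in denoiser which noise it predicts, and push the
    (eps_hat - eps) gradient back through the curve parameters."""
    rng = rng or np.random.default_rng(0)
    ts = np.linspace(0.0, 1.0, len(target_pts))
    pts = cubic_bezier(ctrl, ts)
    eps = rng.normal(size=pts.shape)
    noisy = pts + eps
    # Stand-in denoiser (assumption): predicts the noise as (noisy - target),
    # i.e. it believes the clean image is target_pts. The real method instead
    # queries a pretrained diffusion model conditioned on the text prompt.
    eps_hat = noisy - target_pts
    grad_pts = eps_hat - eps  # SDS gradient w.r.t. the rendered points
    # Chain rule through the Bézier basis onto each control point.
    t = ts[:, None]
    basis = [(1 - t) ** 3, 3 * (1 - t) ** 2 * t,
             3 * (1 - t) * t ** 2, t ** 3]
    return [c - lr * (b * grad_pts).sum(axis=0) for c, b in zip(ctrl, basis)]
```

Repeating `sds_like_step` moves the sampled curve points toward whatever the (stand-in) model considers a clean image; in DiffSketcher, that role is played by the diffusion prior's score, so the strokes drift toward imagery matching the text prompt.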

Cite

Text

Xing et al. "DiffSketcher: Text Guided Vector Sketch Synthesis Through Latent Diffusion Models." Neural Information Processing Systems, 2023.

Markdown

[Xing et al. "DiffSketcher: Text Guided Vector Sketch Synthesis Through Latent Diffusion Models." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/xing2023neurips-diffsketcher/)

BibTeX

@inproceedings{xing2023neurips-diffsketcher,
  title     = {{DiffSketcher: Text Guided Vector Sketch Synthesis Through Latent Diffusion Models}},
  author    = {Xing, XiMing and Wang, Chuang and Zhou, Haitao and Zhang, Jing and Yu, Qian and Xu, Dong},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/xing2023neurips-diffsketcher/}
}