Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation

Abstract

Generating sketches guided by reference styles requires precise transfer of stroke attributes, such as line thickness, deformation, and texture sparsity, while preserving semantic structure and content fidelity. To this end, we propose Stroke2Sketch, a novel training-free framework that introduces cross-image stroke attention, a mechanism embedded within self-attention layers to establish fine-grained semantic correspondences and enable accurate stroke attribute transfer. This allows our method to adaptively integrate reference stroke characteristics into content images while maintaining structural integrity. Additionally, we develop adaptive contrast enhancement and semantic-focused attention to reinforce content preservation and foreground emphasis. Stroke2Sketch effectively synthesizes stylistically faithful sketches that closely resemble handcrafted results, outperforming existing methods in expressive stroke control and semantic coherence.
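The cross-image stroke attention described in the abstract can be pictured as a self-attention layer whose keys and values are drawn from the reference sketch's features instead of the content image's own, so the attended output inherits the reference's stroke attributes while the queries preserve content structure. Below is a minimal PyTorch sketch of that general idea; the function name, tensor shapes, and the way reference features are injected are illustrative assumptions, not the paper's actual implementation.

import torch

def cross_image_stroke_attention(q_content, k_ref, v_ref):
    # Hypothetical illustration, not the paper's code.
    # q_content: (B, N, D) queries from the content image's features
    # k_ref, v_ref: (B, M, D) keys/values taken from the reference
    # sketch's features, so attended values carry its stroke attributes
    # (thickness, deformation, texture) back to content-image positions.
    scale = q_content.shape[-1] ** -0.5
    attn = torch.softmax(q_content @ k_ref.transpose(-2, -1) * scale, dim=-1)
    return attn @ v_ref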

Cite

Text

Yang et al. "Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation." International Conference on Computer Vision, 2025.

Markdown

[Yang et al. "Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yang2025iccv-stroke2sketch/)

BibTeX

@inproceedings{yang2025iccv-stroke2sketch,
  title     = {{Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation}},
  author    = {Yang, Rui and Li, Huining and Long, Yiyi and Wu, Xiaojun and He, Shengfeng},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {16545--16554},
  url       = {https://mlanthology.org/iccv/2025/yang2025iccv-stroke2sketch/}
}