StrokeNUWA—Tokenizing Strokes for Vector Graphic Synthesis

Abstract

To leverage LLMs for visual synthesis, traditional methods convert raster image information into discrete grid tokens through specialized visual modules, thereby disrupting the model's ability to capture the true semantic representation of visual scenes. This paper posits that an alternative representation of images, vector graphics, can effectively surmount this limitation by enabling a more natural and semantically coherent segmentation of the image information. Thus, we introduce StrokeNUWA, a pioneering work exploring a better visual representation, "stroke" tokens derived from vector graphics, which are rich in visual semantics, naturally compatible with LLMs, and highly compressed. Equipped with stroke tokens, StrokeNUWA significantly surpasses traditional LLM-based and optimization-based methods across various metrics on the vector graphic generation task. Moreover, StrokeNUWA achieves up to a $94\times$ inference speedup over prior methods, with an exceptional SVG code compression ratio of 6.9%.
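The sketch below is a minimal, hedged illustration of the general idea behind stroke tokens, not the authors' implementation: a vector path is cut into short stroke segments, and each segment is mapped to a discrete index via a VQ-style codebook lookup so that an LLM can consume the graphic as a short token sequence. The segment length, codebook size, normalization step, and nearest-neighbour quantizer are all assumptions made here for illustration.

```python
# Illustrative sketch of stroke tokenization (assumptions only, not the
# StrokeNUWA codebase): segment a sampled SVG path, then quantize each
# segment against a learned codebook to obtain discrete "stroke" tokens.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 4096 stroke tokens, each an 8-dim prototype vector.
CODEBOOK = rng.normal(size=(4096, 8))


def path_to_segments(points: np.ndarray, seg_len: int = 4) -> np.ndarray:
    """Cut a polyline approximation of an SVG path into fixed-length segments.

    points: (N, 2) array of (x, y) coordinates sampled along the path.
    Returns an array of shape (num_segments, seg_len * 2).
    """
    usable = (len(points) // seg_len) * seg_len
    segments = points[:usable].reshape(-1, seg_len, 2)
    # Normalize each segment relative to its first point so tokens are
    # translation-invariant (one plausible design choice, not the paper's).
    segments = segments - segments[:, :1, :]
    return segments.reshape(len(segments), -1)


def quantize(segments: np.ndarray) -> list[int]:
    """Map each stroke segment to the index of its nearest codebook entry."""
    dists = np.linalg.norm(segments[:, None, :] - CODEBOOK[None, :, :], axis=-1)
    return dists.argmin(axis=1).tolist()


if __name__ == "__main__":
    # Toy "path": 32 points sampled along a sine curve.
    xs = np.linspace(0, 2 * np.pi, 32)
    path = np.stack([xs, np.sin(xs)], axis=1)
    tokens = quantize(path_to_segments(path))
    print(f"{len(path)} points -> {len(tokens)} stroke tokens: {tokens}")
```

In this toy example, 32 sampled points collapse to 8 discrete tokens, which conveys how stroke-level quantization shortens the sequence an LLM must model; the 6.9% compression ratio reported in the abstract refers to the paper's actual SVG code representation, not to this illustration.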

Cite

Text

Tang et al. "StrokeNUWA—Tokenizing Strokes for Vector Graphic Synthesis." International Conference on Machine Learning, 2024.

Markdown

[Tang et al. "StrokeNUWA—Tokenizing Strokes for Vector Graphic Synthesis." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/tang2024icml-strokenuwatokenizing/)

BibTeX

@inproceedings{tang2024icml-strokenuwatokenizing,
  title     = {{StrokeNUWA—Tokenizing Strokes for Vector Graphic Synthesis}},
  author    = {Tang, Zecheng and Wu, Chenfei and Zhang, Zekai and Ni, Minheng and Yin, Shengming and Liu, Yu and Yang, Zhengyuan and Wang, Lijuan and Liu, Zicheng and Li, Juntao and Duan, Nan},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {47830--47845},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/tang2024icml-strokenuwatokenizing/}
}