Type-R: Automatically Retouching Typos for Text-to-Image Generation

Abstract

While recent text-to-image models can generate photorealistic images from text prompts that reflect detailed instructions, they still face significant challenges in accurately rendering words in the image. In this paper, we propose to retouch erroneous text renderings in the post-processing pipeline. Our approach, called Type-R, identifies typographical errors in the generated image, erases the erroneous text, regenerates text boxes for missing words, and finally corrects typos in the rendered words. Through extensive experiments, we show that Type-R, in combination with the latest text-to-image models such as Stable Diffusion or Flux, achieves the highest text rendering accuracy while maintaining image quality and also outperforms text-focused generation baselines in terms of balancing text accuracy and image quality.
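The abstract describes a four-stage retouching loop: detect typographical errors, erase them, regenerate text boxes for missing words, and correct the rendered typos. The toy sketch below illustrates only that control flow; the real system operates on images with detection, inpainting, layout, and text-editing models, whereas here each stage is a placeholder over word lists. All function and variable names are illustrative assumptions, not the authors' actual API.

```python
def type_r_sketch(rendered_words, intended_words):
    """Toy stand-in for the Type-R pipeline described in the abstract.

    rendered_words: words detected in the generated image (may contain typos).
    intended_words: words the prompt asked to render.
    """
    # Stage 1: identify typographical errors (rendered words not in the prompt).
    errors = [w for w in rendered_words if w not in intended_words]

    # Stage 2: erase the erroneous text (image inpainting in the real pipeline).
    kept = [w for w in rendered_words if w not in errors]

    # Stage 3: regenerate text boxes for missing words (layout generation
    # in the real pipeline).
    missing = [w for w in intended_words if w not in kept]

    # Stage 4: correct typos by re-rendering each missing word correctly.
    return kept + missing


# Example: "COFEE" is detected as a typo, erased, and re-rendered as "COFFEE".
print(type_r_sketch(["SALE", "COFEE"], ["SALE", "COFFEE"]))
```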

Cite

Text

Shimoda et al. "Type-R: Automatically Retouching Typos for Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00262

Markdown

[Shimoda et al. "Type-R: Automatically Retouching Typos for Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/shimoda2025cvpr-typer/) doi:10.1109/CVPR52734.2025.00262

BibTeX

@inproceedings{shimoda2025cvpr-typer,
  title     = {{Type-R: Automatically Retouching Typos for Text-to-Image Generation}},
  author    = {Shimoda, Wataru and Inoue, Naoto and Haraguchi, Daichi and Mitani, Hayato and Uchida, Seiichi and Yamaguchi, Kota},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {2745--2754},
  doi       = {10.1109/CVPR52734.2025.00262},
  url       = {https://mlanthology.org/cvpr/2025/shimoda2025cvpr-typer/}
}