Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders

Abstract

The introduction of generative models has significantly advanced image super-resolution (SR) in handling real-world degradations. However, they often incur fidelity-related issues, particularly distorting textual structures. In this paper, we introduce a novel diffusion-based SR framework, namely TADiSR, which integrates text-aware attention and joint segmentation decoders to recover not only natural details but also the structural fidelity of text regions in degraded real-world images. Moreover, we propose a complete pipeline for synthesizing high-quality images with fine-grained full-image text masks, combining realistic foreground text regions with detailed background content. Extensive experiments demonstrate that our approach substantially enhances text legibility in super-resolved images, achieving state-of-the-art performance across multiple evaluation metrics and exhibiting strong generalization to real-world scenarios. Our code is available [here](https://github.com/mingcv/TADiSR).

Cite

Text

Hu et al. "Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders." Advances in Neural Information Processing Systems, 2025.

Markdown

[Hu et al. "Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/hu2025neurips-textaware/)

BibTeX

@inproceedings{hu2025neurips-textaware,
  title     = {{Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders}},
  author    = {Hu, Qiming and Fan, Linlong and Luo, Yiyan and Yu, Yuhang and Guo, Xiaojie and Fan, Qingnan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/hu2025neurips-textaware/}
}