Texture-Based Error Analysis for Image Super-Resolution

Abstract

Evaluation practices for image super-resolution (SR) typically rely on a single scalar metric, such as PSNR or SSIM, to summarize model performance. This provides little insight into the source of errors or into model behavior. It is therefore beneficial to move beyond the conventional approach and reconceptualize evaluation with interpretability as the main priority. We focus on a thorough error analysis from a variety of perspectives. Our key contribution is to leverage a texture classifier, which assigns semantic labels to patches, to identify the source of SR errors both globally and locally. We then use this to determine (a) the semantic alignment of SR datasets, (b) how SR models perform on each label, (c) to what extent high-resolution (HR) and SR patches semantically correspond, and more. Through these different angles, we are able to highlight potential pitfalls and blind spots. Our overall investigation surfaces numerous unexpected insights. We hope this work serves as an initial step toward debugging black-box SR networks.
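The per-label error breakdown described above can be illustrated with a minimal sketch: patches are assigned texture labels and PSNR is aggregated per label rather than over the whole image. The `label_patch` heuristic below is a hypothetical stand-in for the paper's learned texture classifier, used only to make the example self-contained.

```python
import numpy as np

def psnr(a, b, max_val=1.0):
    """PSNR between two same-shaped arrays in [0, max_val]."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(max_val ** 2 / mse) if mse > 0 else float("inf")

def label_patch(patch):
    # Hypothetical stand-in for a learned texture classifier:
    # split patches into "smooth" vs. "textured" by local variance.
    return "textured" if patch.var() > 0.01 else "smooth"

def per_label_psnr(hr, sr, patch=8):
    """Aggregate PSNR per texture label over non-overlapping patches."""
    scores = {}
    h, w = hr.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            hp = hr[y:y + patch, x:x + patch]
            sp = sr[y:y + patch, x:x + patch]
            scores.setdefault(label_patch(hp), []).append(psnr(hp, sp))
    return {label: float(np.mean(vals)) for label, vals in scores.items()}
```

A per-label table like the one this produces makes it visible when a model's aggregate PSNR hides systematically worse reconstruction on, say, textured regions.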

Cite

Text

Magid et al. "Texture-Based Error Analysis for Image Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00216

Markdown

[Magid et al. "Texture-Based Error Analysis for Image Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/magid2022cvpr-texturebased/) doi:10.1109/CVPR52688.2022.00216

BibTeX

@inproceedings{magid2022cvpr-texturebased,
  title     = {{Texture-Based Error Analysis for Image Super-Resolution}},
  author    = {Magid, Salma Abdel and Lin, Zudi and Wei, Donglai and Zhang, Yulun and Gu, Jinjin and Pfister, Hanspeter},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {2118--2127},
  doi       = {10.1109/CVPR52688.2022.00216},
  url       = {https://mlanthology.org/cvpr/2022/magid2022cvpr-texturebased/}
}