Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment

Abstract

Previous full-reference image quality assessment (FR-IQA) methods aim to evaluate the quality of images impaired by traditional distortions such as JPEG compression, white noise, and Gaussian blur. However, little research has measured the quality of images produced by image processing algorithms such as super-resolution, denoising, and restoration. Motivated by a previous model that predicts distortion sensitivity maps, we use DeepQA as a baseline model on a challenge database that includes such diverse distortions. We further improve the baseline by dividing it into three parts and modifying each: 1) a distortion encoding network, 2) a sensitivity generation network, and 3) score regression. Through rigorous experiments, the proposed model achieves better prediction accuracy on the challenge database than other methods and also yields better visualization results than the baseline model. We submitted our model to the NTIRE 2021 Perceptual Image Quality Assessment Challenge and placed 12th in the main score.
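As a rough illustration of the three-part design named in the abstract, the sketch below wires a distortion encoding network, a sensitivity generation network, and a score regressor together in PyTorch, following the DeepQA idea of weighting an objective error map by a learned sensitivity map before pooling to a score. The module names, layer widths, and pooling choices here are our own assumptions for illustration, not the authors' implementation.

# Minimal PyTorch sketch of the three-part structure described in the
# abstract. Channel widths, layer counts, and pooling are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class DistortionEncoder(nn.Module):
    """Encodes the reference/distorted pair into distortion features."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, ref, dist):
        return self.net(torch.cat([ref, dist], dim=1))

class SensitivityGenerator(nn.Module):
    """Maps distortion features to a per-pixel sensitivity map in [0, 1]."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, feat):
        return self.net(feat)

class ScoreRegressor(nn.Module):
    """Pools the sensitivity-weighted error map into a scalar score."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 1)
    def forward(self, weighted_error):
        pooled = weighted_error.mean(dim=(1, 2, 3))  # spatial average pooling
        return self.fc(pooled.unsqueeze(1)).squeeze(1)

class SensitivityFRIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = DistortionEncoder()
        self.generator = SensitivityGenerator()
        self.regressor = ScoreRegressor()
    def forward(self, ref, dist):
        # Objective per-pixel error between reference and distorted images
        error_map = (ref - dist).pow(2).mean(dim=1, keepdim=True)
        # Learned perceptual weighting of that error
        sensitivity = self.generator(self.encoder(ref, dist))
        return self.regressor(sensitivity * error_map)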

Cite

Text

Ahn et al. "Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00044

Markdown

[Ahn et al. "Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/ahn2021cvprw-deep/) doi:10.1109/CVPRW53098.2021.00044

BibTeX

@inproceedings{ahn2021cvprw-deep,
  title     = {{Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment}},
  author    = {Ahn, Sewoong and Choi, Yeji and Yoon, Kwangjin},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {344--353},
  doi       = {10.1109/CVPRW53098.2021.00044},
  url       = {https://mlanthology.org/cvprw/2021/ahn2021cvprw-deep/}
}