Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop
Abstract
No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references. NR-IQA models are extensively studied in computational vision, and are widely used for performance evaluation and perceptual optimization of man-made vision systems. Here we make one of the first attempts to examine the perceptual robustness of NR-IQA models. Under a Lagrangian formulation, we identify insightful connections of the proposed perceptual attack to previous beautiful ideas in computer vision and machine learning. We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models (as approximations to human perception of just-noticeable differences). Through carefully designed psychophysical experiments, we find that all four NR-IQA models are vulnerable to the proposed perceptual attack. More interestingly, we observe that the generated counterexamples are not transferable, manifesting themselves as distinct design flaws of respective NR-IQA methods. Source code is available at https://github.com/zwx8981/PerceptualAttack_BIQA.
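The Lagrangian formulation mentioned above relaxes a constrained problem (perturb the image to maximize the change in the NR-IQA prediction, while a full-reference IQA distance to the original stays below a just-noticeable threshold) into a single penalized objective. Below is a minimal PyTorch sketch of this idea; the callables `nr_iqa` and `fr_iqa`, the Adam optimizer, the multiplier `lam`, and the step count are illustrative assumptions rather than the authors' exact implementation, which lives in the linked repository.

```python
import torch

def perceptual_attack(x, nr_iqa, fr_iqa, lam=0.1, lr=0.01, steps=200):
    """Sketch of a gradient-based perceptual attack on an NR-IQA model.

    Maximizes the change in the NR-IQA prediction while penalizing the
    full-reference IQA distance to the original image, which serves as a
    proxy for human-perceived just-noticeable differences.
    """
    q0 = nr_iqa(x).detach()  # quality prediction for the original image
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Lagrangian relaxation: push the predicted quality away from q0
        # while keeping the perceptual change (FR-IQA distance) small.
        loss = -(nr_iqa(x_adv) - q0).abs() + lam * fr_iqa(x_adv, x)
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)  # keep pixels in the valid range
    return x_adv.detach()
```

Sweeping `lam` trades off attack strength against perceptual fidelity; the paper instead reports results under four different FR-IQA models standing in for the constraint.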
Cite
Text
Zhang et al. "Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop." Neural Information Processing Systems, 2022.

Markdown

[Zhang et al. "Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhang2022neurips-perceptual/)

BibTeX
@inproceedings{zhang2022neurips-perceptual,
  title     = {{Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop}},
  author    = {Zhang, Weixia and Li, Dingquan and Min, Xiongkuo and Zhai, Guangtao and Guo, Guodong and Yang, Xiaokang and Ma, Kede},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhang2022neurips-perceptual/}
}