CNN-Based Cross-Dataset No-Reference Image Quality Assessment
Abstract
Recent works on no-reference image quality assessment (NR-IQA) have reported good performance on various datasets. However, they suffer significant performance drops in cross-dataset evaluations, which indicates poor generalization power. We propose a Siamese architecture and training procedure for cross-dataset deep NR-IQA that achieve clearly better cross-dataset performance. Moreover, we show that the architecture can be further boosted by i) pre-training on a large aesthetics dataset and ii) adding low-level quality cues (sharpness, tone, and colourfulness) as additional features.
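To illustrate the general idea, the following is a minimal PyTorch-style sketch of a Siamese NR-IQA model in which a shared CNN backbone scores two images and hand-crafted low-level cues are concatenated before the regression head. The class name, layer sizes, and cue handling are placeholder assumptions, not the authors' exact architecture or training procedure.

```python
# Hypothetical sketch of a Siamese NR-IQA network with low-level cue features.
# Not the paper's exact model; backbone and head sizes are placeholders.
import torch
import torch.nn as nn

class SiameseNRIQA(nn.Module):
    def __init__(self, num_cues: int = 3):
        super().__init__()
        # Shared convolutional backbone (stand-in for a deeper pre-trained CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head maps CNN features + low-level cues to a quality score.
        self.head = nn.Sequential(
            nn.Linear(64 + num_cues, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def score(self, img, cues):
        feat = self.backbone(img)
        return self.head(torch.cat([feat, cues], dim=1))

    def forward(self, img_a, cues_a, img_b, cues_b):
        # Siamese forward: both branches share the same weights; training could
        # use a pairwise loss on the difference of the two predicted scores.
        return self.score(img_a, cues_a), self.score(img_b, cues_b)

if __name__ == "__main__":
    model = SiameseNRIQA()
    a, b = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
    ca, cb = torch.rand(2, 3), torch.rand(2, 3)  # e.g. sharpness, tone, colourfulness
    s_a, s_b = model(a, ca, b, cb)
    print(s_a.shape, s_b.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```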
Cite
Text
Yang et al. "CNN-Based Cross-Dataset No-Reference Image Quality Assessment." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00485
Markdown
[Yang et al. "CNN-Based Cross-Dataset No-Reference Image Quality Assessment." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/yang2019iccvw-cnnbased/) doi:10.1109/ICCVW.2019.00485
BibTeX
@inproceedings{yang2019iccvw-cnnbased,
title = {{CNN-Based Cross-Dataset No-Reference Image Quality Assessment}},
author = {Yang, Dan and Peltoketo, Veli-Tapani and Kämäräinen, Joni-Kristian},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {3913-3921},
doi = {10.1109/ICCVW.2019.00485},
url = {https://mlanthology.org/iccvw/2019/yang2019iccvw-cnnbased/}
}