LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model

Abstract

Recent advancements in the field of No-Reference Image Quality Assessment (NR-IQA) using deep learning techniques demonstrate high performance across multiple open-source datasets. However, such models are typically very large and complex, making them ill-suited for real-world deployment, especially on resource- and battery-constrained mobile devices. To address this limitation, we propose a compact, lightweight NR-IQA model that achieves state-of-the-art (SOTA) performance on the ECCV AIM UHD-IQA challenge validation and test datasets while also being nearly 5.7 times faster than the fastest SOTA model. Our model features a dual-branch architecture, with each branch trained separately on synthetically and authentically distorted images, which enhances the model’s generalizability across different distortion types. To improve robustness under diverse real-world visual conditions, we additionally incorporate multiple color spaces during the training process. We also demonstrate the higher accuracy of the recently proposed Kolmogorov-Arnold Networks (KANs) for final quality regression compared to conventional Multi-Layer Perceptrons (MLPs). Our evaluation on various open-source datasets highlights the practical, high-accuracy, and robust performance of our proposed lightweight model. Code: https://github.com/nasimjamshidi/LAR-IQA
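The abstract mentions training with multiple color spaces to improve robustness. As a purely illustrative sketch (not the authors' code), the snippet below shows a minimal pure-Python RGB-to-HSV conversion of the kind such a preprocessing step might use; in practice a library routine (e.g. OpenCV's `cvtColor`) would be used instead.

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB values in [0, 1] to HSV (hue in degrees [0, 360),
    saturation and value in [0, 1]). Illustrative only."""
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    # Hue depends on which channel is largest.
    if d == 0:
        h = 0.0
    elif mx == r:
        h = ((g - b) / d) % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    h *= 60.0
    # Saturation is the chroma relative to the max channel.
    s = 0.0 if mx == 0 else d / mx
    return h, s, mx
```

Feeding the same image to the model in more than one color representation exposes the network to distortions that are more visible in one space than another (e.g. hue shifts versus luminance noise).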

Cite

Text

Avanaki et al. "LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91838-4_20

Markdown

[Avanaki et al. "LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/avanaki2024eccvw-lariqa/) doi:10.1007/978-3-031-91838-4_20

BibTeX

@inproceedings{avanaki2024eccvw-lariqa,
  title     = {{LAR-IQA: A Lightweight, Accurate, and Robust No-Reference Image Quality Assessment Model}},
  author    = {Avanaki, Nasim Jamshidi and Ghildyal, Abhijay and Barman, Nabajeet and Zadtootaghaj, Saman},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {328--345},
  doi       = {10.1007/978-3-031-91838-4_20},
  url       = {https://mlanthology.org/eccvw/2024/avanaki2024eccvw-lariqa/}
}