SR-VQA: Super-Resolution Video Quality Assessment Model
Abstract
As the resolution of user viewing devices continues to advance, many low-resolution videos are enhanced with super-resolution algorithms to improve the viewing experience. However, this enhancement inevitably introduces distortions into the videos. The distortions caused by super-resolution algorithms differ from the typical distortions found in user-generated content (UGC) videos, making it challenging for current mainstream UGC video quality assessment (VQA) methods to accurately evaluate the quality of super-resolution videos. In this paper, we propose the Super-Resolution Video Quality Assessment (SR-VQA) method, built on the UGC quality assessment framework UNQA, to improve performance on super-resolution video quality evaluation. Recognizing that a richer feature representation can significantly enhance model performance, we first extract five types of features through the spatial feature extraction module, motion feature extraction module, edge feature extraction module, saliency feature extraction module, and content feature extraction module. We concatenate these features and use a multi-layer perceptron (MLP) network to regress them into quality scores. Experimental results demonstrate the effectiveness of our proposed model on super-resolution video quality assessment datasets.
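The fusion step described above (concatenating the five feature types and regressing them with an MLP) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all feature dimensions, the hidden width, and the random weights are hypothetical assumptions chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-branch feature dimensions for the five extraction
# modules named in the abstract (spatial, motion, edge, saliency,
# content). These sizes are assumptions, not the paper's configuration.
feat_dims = {"spatial": 256, "motion": 128, "edge": 64,
             "saliency": 64, "content": 512}
features = [rng.standard_normal(d) for d in feat_dims.values()]

# Concatenate the five feature types into one joint representation.
x = np.concatenate(features)          # shape: (1024,)

def mlp_regress(x, hidden=128):
    """Toy two-layer MLP head mapping the fused features to one
    scalar quality score (randomly initialized, untrained)."""
    w1 = rng.standard_normal((hidden, x.size)) * 0.01
    b1 = np.zeros(hidden)
    w2 = rng.standard_normal(hidden) * 0.01
    h = np.maximum(w1 @ x + b1, 0.0)  # ReLU hidden layer
    return float(w2 @ h)              # scalar quality score

score = mlp_regress(x)
print(x.shape, score)
```

In practice the MLP weights would be trained against human opinion scores; the sketch only shows how the concatenated representation feeds a single regression head.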
Cite
Text
Cao et al. "SR-VQA: Super-Resolution Video Quality Assessment Model." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91856-8_9
Markdown
[Cao et al. "SR-VQA: Super-Resolution Video Quality Assessment Model." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/cao2024eccvw-srvqa/) doi:10.1007/978-3-031-91856-8_9
BibTeX
@inproceedings{cao2024eccvw-srvqa,
title = {{SR-VQA: Super-Resolution Video Quality Assessment Model}},
author = {Cao, Yuqin and Sun, Wei and Zhang, Weixia and Sun, Yinan and Jia, Ziheng and Zhu, Yuxin and Min, Xiongkuo and Zhai, Guangtao},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
  pages = {144--159},
doi = {10.1007/978-3-031-91856-8_9},
url = {https://mlanthology.org/eccvw/2024/cao2024eccvw-srvqa/}
}