AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results
Abstract
This paper reviews the video extreme super-resolution challenge associated with the AIM 2020 workshop at ECCV 2020. Common scaling factors for learned video super-resolution (VSR) do not go beyond a factor of 4. Missing information can be restored well in this regime, especially in HR videos, where the high-frequency content mostly consists of texture details. The task in this challenge is to upscale videos by an extreme factor of 16, which results in more severe degradations that also affect the structural integrity of the videos. A single pixel in the low-resolution (LR) domain corresponds to 256 pixels in the high-resolution (HR) domain. Due to this massive information loss, it is hard to accurately restore the missing content. Track 1 is set up to gauge the state of the art for such a demanding task, with fidelity to the ground truth measured by PSNR and SSIM. Higher perceptual quality can be achieved at the expense of fidelity by generating plausible high-frequency content. Track 2 therefore aims at generating visually pleasing results, which are ranked according to human perception through a user study. In contrast to single image super-resolution (SISR), VSR can benefit from additional information in the temporal domain. However, this also imposes an additional requirement: the generated frames need to be consistent over time.
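To make the Track 1 evaluation setup concrete, the minimal sketch below downscales a frame by a factor of 16, upscales it back as a naive bicubic baseline, and compares it to the ground truth with PSNR and SSIM. The random stand-in frame, the bicubic degradation kernel, and the use of OpenCV plus scikit-image (the channel_axis argument needs scikit-image >= 0.19) are illustrative assumptions, not the challenge's official data or evaluation pipeline.

```python
import numpy as np
import cv2  # OpenCV for bicubic resizing
from skimage.metrics import structural_similarity  # needs scikit-image >= 0.19

SCALE = 16  # extreme upscaling factor used in the challenge

def psnr(gt, pred, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two frames of equal shape."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Stand-in HR frame; height and width are chosen divisible by 16.
hr = np.random.randint(0, 256, size=(1024, 1920, 3), dtype=np.uint8)

# Simulate the x16 degradation with bicubic downscaling (the exact kernel of the
# challenge data pipeline is an assumption here). Each LR pixel now covers a
# 16x16 = 256 pixel area of the HR frame.
lr = cv2.resize(hr, (hr.shape[1] // SCALE, hr.shape[0] // SCALE),
                interpolation=cv2.INTER_CUBIC)

# Naive baseline restoration: bicubic upscaling back to HR resolution.
sr = cv2.resize(lr, (hr.shape[1], hr.shape[0]), interpolation=cv2.INTER_CUBIC)

print(f"PSNR: {psnr(hr, sr):.2f} dB")
print(f"SSIM: {structural_similarity(hr, sr, channel_axis=-1, data_range=255):.4f}")
```

A learned VSR method would replace the bicubic upscaling step and could additionally exploit neighboring frames; the metrics would be computed in the same way against the HR ground truth.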
Cite
Text
Fuoli et al. "AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66823-5_4
Markdown
[Fuoli et al. "AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/fuoli2020eccvw-aim/) doi:10.1007/978-3-030-66823-5_4
BibTeX
@inproceedings{fuoli2020eccvw-aim,
title = {{AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results}},
author = {Fuoli, Dario and Huang, Zhiwu and Gu, Shuhang and Timofte, Radu and Raventos, Arnau and Esfandiari, Aryan and Karout, Salah and Xu, Xuan and Li, Xin and Xiong, Xin and Wang, Jinge and Michelini, Pablo Navarrete and Zhang, Wenhao and Zhang, Dongyang and Zhu, Hanwei and Xia, Dan and Chen, Haoyu and Gu, Jinjin and Zhang, Zhi and Zhao, Tongtong and Zhao, Shanshan and Akita, Kazutoshi and Ukita, Norimichi and Hrishikesh, P. S. and Puthussery, Densen and Jiji, C. V.},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {57--81},
doi = {10.1007/978-3-030-66823-5_4},
url = {https://mlanthology.org/eccvw/2020/fuoli2020eccvw-aim/}
}