Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 Challenge: Report
Abstract
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and invite the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality ×3 image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Cite
Text
Ignatov et al. "Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 Challenge: Report." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25066-8_5
Markdown
[Ignatov et al. "Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 Challenge: Report." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/ignatov2022eccvw-efficient-a/) doi:10.1007/978-3-031-25066-8_5
BibTeX
@inproceedings{ignatov2022eccvw-efficient-a,
title = {{Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI \& AIM 2022 Challenge: Report}},
author = {Ignatov, Andrey and Timofte, Radu and Denna, Maurizio and Younes, Abdel and Gankhuyag, Ganzorig and Huh, Jingang and Kim, Myeong Kyun and Yoon, Kihwan and Moon, Hyeon-Cheol and Lee, Seungho and Choe, Yoonsik and Jeong, Jinwoo and Kim, Sungjei and Smyl, Maciej and Latkowski, Tomasz and Kubik, Pawel and Sokolski, Michal and Ma, Yujie and Chao, Jiahao and Zhou, Zhou and Gao, Hongfan and Yang, Zhengfeng and Zeng, Zhenbing and Zhuge, Zhengyang and Li, Chenghua and Zhu, Dan and Sun, Mengdi and Duan, Ran and Gao, Yan and Kong, Lingshun and Sun, Long and Li, Xiang and Zhang, Xingdong and Zhang, Jiawei and Wu, Yaqi and Pan, Jinshan and Yu, Gaocheng and Zhang, Jin and Zhang, Feng and Ma, Zhe and Wang, Hongbin and Cho, Hojin and Kim, Steve and Li, Huaen and Ma, Yanbo and Luo, Ziwei and Li, Youwei and Yu, Lei and Wen, Zhihong and Wu, Qi and Fan, Haoqiang and Liu, Shuaicheng and Zhang, Lize and Zong, Zhikai and Kwon, Jeremy and Zhang, Junxi and Li, Mengyuan and Fu, Nianxiang and Ding, Guanchen and Zhu, Han and Chen, Zhenzhong and Li, Gen and Zhang, Yuanfan and Sun, Lei and Zhang, Dafeng and Yang, Neo and Liu, Fitz and Zhao, Jerry and Ayazoglu, Mustafa and Bilecen, Bahri Batuhan and Hirose, Shota and Arunruangsirilert, Kasidis and Ao, Luo and Leung, Ho Chun and Wei, Andrew and Liu, Jie and Liu, Qiang and Yu, Dahai and Li, Ao and Luo, Lei and Zhu, Ce and Hong, Seongmin and Park, Dongwon and Lee, Joonhee and Lee, Byeong Hyun and Lee, Seunggyu and Chun, Se Young and He, Ruiyuan and Jiang, Xuhao and Ruan, Haihang and Zhang, Xinjian and Liu, Jing and Gendy, Garas and Sabor, Nabil and Hou, Jingchao and He, Guanghui},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
pages = {92--129},
doi = {10.1007/978-3-031-25066-8_5},
url = {https://mlanthology.org/eccvw/2022/ignatov2022eccvw-efficient-a/}
}