Efficient Reference-Based Video Super-Resolution (ERVSR): Single Reference Image Is All You Need

Abstract

Reference-based video super-resolution (RefVSR) is a promising branch of super-resolution that recovers high-frequency textures of a video using a reference video. The multiple cameras with different focal lengths on mobile devices have enabled recent RefVSR works, which aim to super-resolve a low-resolution ultra-wide video by utilizing wide-angle videos. Previous RefVSR works used all reference frames of the Ref video at each time step to super-resolve the low-resolution video. However, computation on these higher-resolution images increases runtime and memory consumption, which hinders the practical application of RefVSR. To solve this problem, we propose Efficient Reference-based Video Super-Resolution (ERVSR), which exploits a single reference frame to super-resolve all low-resolution video frames. We introduce an attention-based feature alignment module and an aggregation upsampling module that attend to LR features using the correlation between the reference and LR frames. The proposed ERVSR achieves 12× faster speed and 1/4 the memory consumption of previous state-of-the-art RefVSR networks, with competitive performance on the RealMCVSR dataset, while using a single reference image.
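To make the core idea concrete, the sketch below shows one way an attention-based alignment between a single reference feature map and an LR frame's feature map could look: LR features form queries, reference features form keys and values, and the resulting correlation map is used to warp reference information onto the LR frame. This is a minimal illustrative sketch, not the authors' implementation; the module name, layer choices, and residual fusion are assumptions.

```python
import torch
import torch.nn as nn


class RefAttentionAlign(nn.Module):
    """Illustrative cross-attention alignment of reference features to LR features.
    Hypothetical layout; not the official ERVSR code."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)  # queries from LR features
        self.key = nn.Conv2d(channels, channels, 1)    # keys from reference features
        self.value = nn.Conv2d(channels, channels, 1)  # values from reference features

    def forward(self, lr_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
        # lr_feat, ref_feat: (B, C, H, W); the reference is assumed to be
        # encoded to the same spatial size as the LR features.
        b, c, h, w = lr_feat.shape
        q = self.query(lr_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(ref_feat).flatten(2)                     # (B, C, HW)
        v = self.value(ref_feat).flatten(2).transpose(1, 2)   # (B, HW, C)

        # Correlation between LR queries and reference keys drives the attention.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)        # (B, HW, HW)
        aligned = (attn @ v).transpose(1, 2).reshape(b, c, h, w)

        # Fuse reference-aligned features back into the LR features.
        return lr_feat + aligned
```

In practice the dense HW×HW correlation would likely be computed patch-wise or at a coarse scale to keep memory bounded; the sketch only conveys how a single reference frame can attend to every LR frame through feature correlation.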

Cite

Text

Kim et al. "Efficient Reference-Based Video Super-Resolution (ERVSR): Single Reference Image Is All You Need." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Kim et al. "Efficient Reference-Based Video Super-Resolution (ERVSR): Single Reference Image Is All You Need." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/kim2023wacv-efficient/)

BibTeX

@inproceedings{kim2023wacv-efficient,
  title     = {{Efficient Reference-Based Video Super-Resolution (ERVSR): Single Reference Image Is All You Need}},
  author    = {Kim, Youngrae and Lim, Jinsu and Cho, Hoonhee and Lee, Minji and Lee, Dongman and Yoon, Kuk-Jin and Choi, Ho-Jin},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {1828-1837},
  url       = {https://mlanthology.org/wacv/2023/kim2023wacv-efficient/}
}