MultiBoot Vsr: Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution
Abstract
To make the best use of previous estimates and the redundancy shared across consecutive video frames, we propose a scene- and class-agnostic, fully convolutional neural network model for 4× video super-resolution. One stage of our network is composed of a motion-compensation-based input subnetwork, a blending backbone, and a spatial upsampling subnetwork. We apply this network recurrently to reconstruct high-resolution frames, then reshuffle each reconstructed frame into multiple low-resolution images and reuse them as additional reference frames. This bootstrapping lets us enhance image quality progressively. Our experiments show that our method generates temporally consistent, high-quality results without artifacts. Our method ranked second by SSIM score on the NTIRE 2019 VSR Challenge, Clean Track.
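The bootstrapping step in the abstract, reshuffling a reconstructed high-resolution frame into multiple low-resolution reference images, corresponds to a standard space-to-depth transform. The following is a minimal sketch, assuming PyTorch (the paper does not specify a framework) and illustrative tensor shapes; it is not the authors' implementation:

```python
# Minimal sketch of the reshuffling step: a reconstructed 4x HR frame is
# converted into scale**2 aligned LR images via space-to-depth
# (torch.nn.functional.pixel_unshuffle). Shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

scale = 4
hr_frame = torch.rand(1, 3, 256, 256)  # one reconstructed HR frame (B, C, H, W)

# Each scale x scale block of HR pixels becomes scale**2 channel groups,
# yielding 16 LR images (48 channels for RGB) at the original input resolution.
lr_refs = F.pixel_unshuffle(hr_frame, downscale_factor=scale)
print(lr_refs.shape)  # torch.Size([1, 48, 64, 64])

# In the multi-stage scheme described above, these LR references would be
# concatenated with the original LR inputs and fed to the next stage.
```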
Cite
Text
Kalarot and Porikli. "MultiBoot Vsr: Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00258
Markdown
[Kalarot and Porikli. "MultiBoot Vsr: Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/kalarot2019cvprw-multiboot/) doi:10.1109/CVPRW.2019.00258
BibTeX
@inproceedings{kalarot2019cvprw-multiboot,
title = {{MultiBoot Vsr: Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution}},
author = {Kalarot, Ratheesh and Porikli, Fatih},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {2060--2069},
doi = {10.1109/CVPRW.2019.00258},
url = {https://mlanthology.org/cvprw/2019/kalarot2019cvprw-multiboot/}
}