Detail-Revealing Deep Video Super-Resolution
Abstract
Previous CNN-based video super-resolution approaches need to align multiple frames to the reference. In this paper, we show that proper frame alignment and motion compensation are crucial for achieving high-quality results. We accordingly propose a 'sub-pixel motion compensation' (SPMC) layer in a CNN framework. Analysis and experiments show the suitability of this layer for video SR. The final end-to-end, scalable CNN framework effectively incorporates the SPMC layer and fuses multiple frames to reveal image details. Our implementation generates visually and quantitatively high-quality results, superior to current state-of-the-art methods, without the need for parameter tuning.
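The SPMC layer's core idea is to warp each low-resolution frame toward the reference using sub-pixel motion and place the samples directly onto the magnified high-resolution grid. Below is a minimal NumPy sketch of that grid mapping, assuming forward (splatting) warping with bilinear weights and a simple weight normalization; the function name, signature, flow convention, and normalization are illustrative assumptions, not the authors' implementation, which is a differentiable layer inside the network.

```python
import numpy as np

def spmc_forward_warp(lr_frame, flow, scale):
    """Illustrative sub-pixel motion compensation sketch (not the paper's code):
    splat each LR pixel onto an HR grid at its flow-displaced, scale-magnified
    position using bilinear weights.

    lr_frame: (H, W) array.
    flow:     (H, W, 2) array of (dx, dy) displacements in LR pixel units
              (assumed convention).
    scale:    integer upscaling factor.
    """
    H, W = lr_frame.shape
    HH, WW = H * scale, W * scale
    out = np.zeros((HH, WW), dtype=np.float64)
    weight = np.zeros((HH, WW), dtype=np.float64)

    ys, xs = np.mgrid[0:H, 0:W]
    # Target sub-pixel positions on the HR grid.
    tx = (xs + flow[..., 0]) * scale
    ty = (ys + flow[..., 1]) * scale

    x0 = np.floor(tx).astype(int)
    y0 = np.floor(ty).astype(int)
    fx, fy = tx - x0, ty - y0

    # Bilinear splat onto the four neighbouring HR pixels.
    for dy in (0, 1):
        for dx in (0, 1):
            w = (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
            xi, yi = x0 + dx, y0 + dy
            valid = (xi >= 0) & (xi < WW) & (yi >= 0) & (yi < HH)
            np.add.at(out, (yi[valid], xi[valid]), w[valid] * lr_frame[valid])
            np.add.at(weight, (yi[valid], xi[valid]), w[valid])

    # Normalize where samples landed; untouched HR positions stay zero,
    # mirroring the sparse HR image that later fusion layers fill in.
    return out / np.maximum(weight, 1e-8)
```

The resulting image is sparse wherever no LR sample lands; in the paper this operation feeds a detail-fusion network that aggregates several such warped frames.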
Cite
Text
Tao et al. "Detail-Revealing Deep Video Super-Resolution." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.479
Markdown
[Tao et al. "Detail-Revealing Deep Video Super-Resolution." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/tao2017iccv-detailrevealing/) doi:10.1109/ICCV.2017.479
BibTeX
@inproceedings{tao2017iccv-detailrevealing,
title = {{Detail-Revealing Deep Video Super-Resolution}},
author = {Tao, Xin and Gao, Hongyun and Liao, Renjie and Wang, Jue and Jia, Jiaya},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.479},
url = {https://mlanthology.org/iccv/2017/tao2017iccv-detailrevealing/}
}