Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions
Abstract
In recent years, numerous deep learning approaches to video super-resolution have been proposed, increasing the resolution of one frame using information found in neighboring frames. Such methods either warp frames into alignment using optical flow, or else forgo warping and use optical flow as an additional network input. In this work we point out the disadvantages inherent in these two approaches and propose one that inherits the best features of both, warping with the integer part of the flow and using the fractional part as network input. Moreover, an iterative residual super-resolution approach is proposed to incrementally improve quality as more neighboring frames are provided. Incorporating the above in a recurrent architecture, we train, evaluate, and compare the proposed network to the state of the art, and note its superior performance on faster motion sequences.
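The quantized-warping idea described in the abstract can be illustrated with a short sketch: the flow is split into an integer part, used to warp the neighboring frame by pure index shifting (so no interpolation blur is introduced), and a fractional remainder, which would be passed to the network as an extra input. This is an illustrative NumPy sketch, not the authors' implementation; the function name, the choice of rounding (rather than flooring) to keep the remainder in [-0.5, 0.5], and the border clamping are all assumptions.

```python
import numpy as np

def quantized_warp(frame, flow):
    """Warp `frame` by the integer part of `flow` and return the
    fractional remainder.

    frame: (H, W) array (one channel, for simplicity).
    flow:  (H, W, 2) array of (dy, dx) displacements.

    Illustrative sketch: rounding (vs. flooring) and edge clamping
    are assumptions, not details from the paper.
    """
    h, w = frame.shape
    int_flow = np.round(flow).astype(np.int64)   # integer part: lossless shift
    frac_flow = flow - int_flow                  # fractional part: network input

    # Gather source pixels by integer offsets, clamping at the borders.
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + int_flow[..., 0], 0, h - 1)
    src_x = np.clip(xs + int_flow[..., 1], 0, w - 1)
    warped = frame[src_y, src_x]                 # pure indexing, no interpolation
    return warped, frac_flow
```

Because the warp is a pure gather by integer offsets, it introduces none of the resampling blur of bilinear warping; the sub-pixel information survives intact in `frac_flow` for the network to exploit.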
Cite
Text

Karageorgos et al. "Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-67070-2_41

Markdown

[Karageorgos et al. "Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/karageorgos2020eccvw-quantized/) doi:10.1007/978-3-030-67070-2_41

BibTeX
@inproceedings{karageorgos2020eccvw-quantized,
title = {{Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions}},
author = {Karageorgos, Konstantinos and Zafeirouli, Kassiani and Konstantoudakis, Konstantinos and Dimou, Anastasios and Daras, Petros},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {682-697},
doi = {10.1007/978-3-030-67070-2_41},
url = {https://mlanthology.org/eccvw/2020/karageorgos2020eccvw-quantized/}
}