Omniscient Video Super-Resolution
Abstract
Most recent video super-resolution (SR) methods either process low-resolution (LR) frames from a temporally sliding window in an iterative manner, or recurrently leverage the previously estimated SR output to help reconstruct the current frame. A few studies combine these two structures into a hybrid framework, but fail to exploit its full potential. In this paper, we propose an omniscient framework that utilizes not only the preceding SR output, but also the SR outputs from the present and future. The omniscient framework is more general, since the iterative, recurrent, and hybrid frameworks can all be regarded as its special cases. It enables a generator to perform better than its counterparts under the other frameworks. Extensive experiments on public datasets show that our method surpasses state-of-the-art methods in objective metrics, subjective visual quality, and complexity.
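To make the core idea concrete, below is a minimal conceptual sketch in PyTorch of a video SR network in which each frame's reconstruction draws on hidden states propagated from past frames, the present frame, and future frames. This is not the authors' implementation: the class names (PropagationCell, OmniscientStyleVSR), the bidirectional-propagation design, and all layer sizes are illustrative assumptions, and the paper's actual architecture may differ substantially.

# Conceptual sketch only (hypothetical modules, not the paper's code): information
# flows forward from the past and backward from the future before each frame is
# reconstructed, so every output is informed by past, present, and future.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PropagationCell(nn.Module):
    """Fuses the current LR frame with a propagated hidden state."""
    def __init__(self, channels=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 + channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, lr_frame, hidden):
        return self.fuse(torch.cat([lr_frame, hidden], dim=1))

class OmniscientStyleVSR(nn.Module):
    """Bidirectional propagation plus per-frame fusion and upsampling."""
    def __init__(self, channels=32, scale=4):
        super().__init__()
        self.forward_cell = PropagationCell(channels)   # carries information from the past
        self.backward_cell = PropagationCell(channels)  # carries information from the future
        self.reconstruct = nn.Conv2d(2 * channels, 3 * scale * scale, 3, padding=1)
        self.scale = scale
        self.channels = channels

    def forward(self, lr_frames):                # lr_frames: (B, T, 3, H, W)
        b, t, _, h, w = lr_frames.shape
        fwd, bwd = [], []
        h_fwd = lr_frames.new_zeros(b, self.channels, h, w)
        h_bwd = lr_frames.new_zeros(b, self.channels, h, w)
        for i in range(t):                        # past -> present
            h_fwd = self.forward_cell(lr_frames[:, i], h_fwd)
            fwd.append(h_fwd)
        for i in reversed(range(t)):              # future -> present
            h_bwd = self.backward_cell(lr_frames[:, i], h_bwd)
            bwd.insert(0, h_bwd)
        outputs = []
        for i in range(t):                        # fuse both directions, then upsample
            feat = torch.cat([fwd[i], bwd[i]], dim=1)
            residual = F.pixel_shuffle(self.reconstruct(feat), self.scale)
            base = F.interpolate(lr_frames[:, i], scale_factor=self.scale,
                                 mode='bilinear', align_corners=False)
            outputs.append(base + residual)
        return torch.stack(outputs, dim=1)        # (B, T, 3, scale*H, scale*W)

Under these assumptions, purely forward propagation would correspond to a recurrent framework and a fixed sliding window to an iterative one, which is why the omniscient view can be seen as the more general case.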
Cite
Text
Yi et al. "Omniscient Video Super-Resolution." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00439

Markdown
[Yi et al. "Omniscient Video Super-Resolution." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/yi2021iccv-omniscient/) doi:10.1109/ICCV48922.2021.00439

BibTeX
@inproceedings{yi2021iccv-omniscient,
title = {{Omniscient Video Super-Resolution}},
author = {Yi, Peng and Wang, Zhongyuan and Jiang, Kui and Jiang, Junjun and Lu, Tao and Tian, Xin and Ma, Jiayi},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {4429-4438},
doi = {10.1109/ICCV48922.2021.00439},
url = {https://mlanthology.org/iccv/2021/yi2021iccv-omniscient/}
}