Revisiting Temporal Alignment for Video Restoration

Abstract

Long-range temporal alignment is critical yet challenging for video restoration tasks. Recently, several works have attempted to divide long-range alignment into multiple sub-alignments and handle them progressively. Although this strategy helps model distant correspondences, error accumulation is inevitable due to the propagation mechanism. In this work, we present a novel, generic iterative alignment module that employs a gradual refinement scheme for sub-alignments, yielding more accurate motion compensation. To further enhance alignment accuracy and temporal consistency, we develop a non-parametric re-weighting method in which the importance of each neighboring frame is adaptively evaluated at every spatial location for aggregation. By virtue of the proposed strategies, our model achieves state-of-the-art performance on multiple benchmarks across a range of video restoration tasks, including video super-resolution, denoising, and deblurring.
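
To make the two ideas in the abstract concrete, below is a minimal PyTorch-style sketch of (1) iteratively refining a long-range alignment with residual flow updates, so alignment errors are corrected rather than accumulated as in a purely chained, progressive scheme, and (2) non-parametric spatial re-weighting of the aligned neighbors before aggregation. The flow-based warping, the flow_net estimator, and all shapes are assumptions for illustration only, not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp a frame (N, C, H, W) with a dense flow field (N, 2, H, W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)        # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                    # (N, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)                # (N, H, W, 2)
    return F.grid_sample(frame, grid_norm, align_corners=True)

def iterative_align(neighbor, reference, flow_net, num_iters=3):
    """Gradually refine the alignment of `neighbor` to `reference`.

    Each iteration estimates a residual flow between the currently warped
    neighbor and the reference (flow_net is a hypothetical estimator taking
    the concatenated pair and returning an (N, 2, H, W) flow), so the
    alignment is corrected step by step instead of accumulating errors.
    """
    n, _, h, w = neighbor.shape
    flow = torch.zeros(n, 2, h, w, device=neighbor.device)
    for _ in range(num_iters):
        warped = warp(neighbor, flow)
        flow = flow + flow_net(torch.cat((warped, reference), dim=1))    # residual update
    return warp(neighbor, flow)

def reweighted_fusion(aligned_neighbors, reference):
    """Non-parametric spatial re-weighting: score each aligned neighbor at every
    pixel by its similarity to the reference, then aggregate with a softmax."""
    sims = torch.stack(
        [-((x - reference) ** 2).mean(dim=1, keepdim=True) for x in aligned_neighbors],
        dim=0,
    )                                                                    # (T, N, 1, H, W)
    weights = torch.softmax(sims, dim=0)                                 # per-pixel weights
    frames = torch.stack(aligned_neighbors, dim=0)                       # (T, N, C, H, W)
    return (weights * frames).sum(dim=0)                                 # (N, C, H, W)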

Cite

Text

Zhou et al. "Revisiting Temporal Alignment for Video Restoration." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00596

Markdown

[Zhou et al. "Revisiting Temporal Alignment for Video Restoration." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhou2022cvpr-revisiting/) doi:10.1109/CVPR52688.2022.00596

BibTeX

@inproceedings{zhou2022cvpr-revisiting,
  title     = {{Revisiting Temporal Alignment for Video Restoration}},
  author    = {Zhou, Kun and Li, Wenbo and Lu, Liying and Han, Xiaoguang and Lu, Jiangbo},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {6053--6062},
  doi       = {10.1109/CVPR52688.2022.00596},
  url       = {https://mlanthology.org/cvpr/2022/zhou2022cvpr-revisiting/}
}