Task Agnostic Restoration of Natural Video Dynamics

Abstract

In many video restoration and translation tasks, image processing operations are naively extended to the video domain by processing each frame independently, disregarding the temporal connections between video frames. This disregard often leads to severe temporal inconsistencies. State-of-the-art (SOTA) techniques that address these inconsistencies rely on the availability of unprocessed videos, from which they implicitly siphon consistent video dynamics to restore the temporal consistency of frame-wise processed videos; this reliance often jeopardizes the translation effect. We propose a general framework for this task that learns to infer and utilize consistent motion dynamics from inconsistent videos, mitigating temporal flicker while preserving perceptual quality for both temporally neighboring and relatively distant frames, without requiring the raw videos at test time. The proposed framework produces SOTA results on two benchmark datasets, DAVIS and videvo.net, processed by numerous image processing applications. The code and the trained models will be open-sourced upon acceptance.

Cite

Text

Ali et al. "Task Agnostic Restoration of Natural Video Dynamics." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01245

Markdown

[Ali et al. "Task Agnostic Restoration of Natural Video Dynamics." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/ali2023iccv-task/) doi:10.1109/ICCV51070.2023.01245

BibTeX

@inproceedings{ali2023iccv-task,
  title     = {{Task Agnostic Restoration of Natural Video Dynamics}},
  author    = {Ali, Muhammad Kashif and Kim, Dongjin and Kim, Tae Hyun},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {13534--13544},
  doi       = {10.1109/ICCV51070.2023.01245},
  url       = {https://mlanthology.org/iccv/2023/ali2023iccv-task/}
}