Flow-Guided Video Inpainting with Scene Templates

Abstract

We consider the problem of filling in missing spatio-temporal regions of a video. We provide a novel flow-based solution by introducing a generative model of images in relation to the scene (without missing regions) and of mappings from the scene to images. We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings. This ensures that the generated frame-to-frame flows are consistent with the underlying scene, reducing geometric distortions in flow-based inpainting. The template is mapped into the missing regions of the video by a new (L2-L1) interpolation scheme, producing crisp inpaintings and reducing common blur and distortion artifacts. On two benchmark datasets, our approach outperforms the state of the art both quantitatively and in user studies.
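As a rough illustration of the general flow-guided compositing idea, the sketch below warps a 2D scene template into a frame through a per-pixel mapping and fills only the masked (missing) region. This is a minimal toy example, not the paper's method: the function names, the nearest-neighbour sampling, and the hard compositing are all assumptions, and the paper's (L2-L1) interpolation scheme is not reproduced here.

```python
import numpy as np

def warp_template(template, mapping):
    """Sample the 2D scene template at per-pixel (row, col) mapping
    coordinates. Nearest-neighbour sampling is used for brevity; the
    paper's interpolation scheme differs."""
    ys = np.clip(np.round(mapping[..., 0]).astype(int), 0, template.shape[0] - 1)
    xs = np.clip(np.round(mapping[..., 1]).astype(int), 0, template.shape[1] - 1)
    return template[ys, xs]

def inpaint_frame(frame, mask, template, mapping):
    """Fill the masked (missing) region of a frame with pixels drawn
    from the scene template via the scene-to-image mapping."""
    filled = warp_template(template, mapping)
    out = frame.copy()
    out[mask] = filled[mask]  # composite template content only where pixels are missing
    return out

# Toy usage: identity mapping, so masked pixels are copied straight from the template.
template = np.arange(16, dtype=float).reshape(4, 4)
mapping = np.stack(
    np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), axis=-1
).astype(float)
frame = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = inpaint_frame(frame, mask, template, mapping)
```

Applying the same template with per-frame mappings is what keeps the filled-in content geometrically consistent across frames, since every frame's inpainted pixels are drawn from one shared scene representation.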

Cite

Text

Lao et al. "Flow-Guided Video Inpainting with Scene Templates." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01433

Markdown

[Lao et al. "Flow-Guided Video Inpainting with Scene Templates." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/lao2021iccv-flowguided/) doi:10.1109/ICCV48922.2021.01433

BibTeX

@inproceedings{lao2021iccv-flowguided,
  title     = {{Flow-Guided Video Inpainting with Scene Templates}},
  author    = {Lao, Dong and Zhu, Peihao and Wonka, Peter and Sundaramoorthi, Ganesh},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {14599--14608},
  doi       = {10.1109/ICCV48922.2021.01433},
  url       = {https://mlanthology.org/iccv/2021/lao2021iccv-flowguided/}
}