Video Inpainting for Arbitrary Foreground Object Removal
Abstract
In this paper, we propose a robust video inpainting method for challenging background conditions such as occlusion, complex visual patterns, overlaid object clutter, and the depth variation observed by a moving camera. We propose a confidence score based on the normalized difference between the observed depth of a potential background point and its predicted distance in 3D space. Potential points from neighboring frames are collected, refined, and weighted to select a small number of qualified observations that fill in the region of the removed object in the current frame. Our method is evaluated on both a public dataset and our own video clips, and it outperforms multiple state-of-the-art video inpainting methods.
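The abstract describes a confidence score built from the normalized difference between a candidate background point's observed depth and its predicted distance in 3D space. The paper does not give the exact formula here, so the sketch below is only a hypothetical instance of such a score: the depth discrepancy is normalized by the predicted distance and mapped into (0, 1] with a Gaussian kernel whose tolerance `sigma` is an assumed parameter, not taken from the paper.

```python
import numpy as np

def background_confidence(observed_depth, predicted_depth, sigma=0.05):
    """Confidence that a candidate point is true background.

    A hypothetical realization of the 'normalized difference' score the
    abstract mentions: |observed - predicted| is normalized by the
    predicted distance, then mapped to (0, 1] so that a perfect depth
    agreement yields confidence 1 and large discrepancies decay toward 0.
    `sigma` is an assumed tolerance, not a value from the paper.
    """
    normalized_diff = np.abs(observed_depth - predicted_depth) / np.maximum(
        predicted_depth, 1e-6
    )
    return np.exp(-(normalized_diff ** 2) / (2.0 * sigma ** 2))
```

Under this sketch, candidate observations gathered from neighboring frames could be ranked by this score, keeping only the highest-confidence points to fill the removed-object region.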
Cite
Text
Siddique and Lee. "Video Inpainting for Arbitrary Foreground Object Removal." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00195
Markdown
[Siddique and Lee. "Video Inpainting for Arbitrary Foreground Object Removal." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/siddique2018wacv-video/) doi:10.1109/WACV.2018.00195
BibTeX
@inproceedings{siddique2018wacv-video,
title = {{Video Inpainting for Arbitrary Foreground Object Removal}},
author = {Siddique, Ashraf and Lee, Seungkyu},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2018},
pages = {1755-1763},
doi = {10.1109/WACV.2018.00195},
url = {https://mlanthology.org/wacv/2018/siddique2018wacv-video/}
}