Video Matting Using Multi-Frame Nonlocal Matting Laplacian
Abstract
We present an algorithm for extracting high-quality, temporally coherent alpha mattes of objects from a video. Our approach extends a conventional image matting approach, closed-form matting, to video using a multi-frame nonlocal matting Laplacian. The multi-frame nonlocal matting Laplacian is defined over a nonlocal neighborhood in the spatio-temporal domain, and it solves the alpha mattes of several video frames simultaneously. To speed up computation and reduce the memory required to solve the multi-frame nonlocal matting Laplacian, we use approximate nearest neighbor (ANN) search to find the nonlocal neighborhood and a k-d tree implementation to divide the nonlocal matting Laplacian into several smaller linear systems. Finally, we adopt nonlocal mean regularization to enhance the temporal coherence of the estimated alpha mattes and to correct alpha matte errors in low-contrast regions. We demonstrate the effectiveness of our approach on various examples, with qualitative comparisons to results from previous matting algorithms.
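To make the pipeline concrete, here is a minimal sketch of the core idea the abstract describes: per-pixel spatio-temporal features indexed by a k-d tree for (approximate) nearest-neighbor search, a nonlocal affinity graph and its Laplacian, and a scribble-constrained sparse linear solve for alpha. All function names, feature choices, and parameters (`k`, `sigma`, `lam`) are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of a nonlocal matting Laplacian solve.
# Assumptions (not from the paper): Gaussian affinities, a soft
# quadratic scribble penalty, and scipy's exact k-d tree standing
# in for the ANN search.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

def nonlocal_alpha(features, scribbles, k=5, sigma=0.1, lam=100.0):
    """features: (N, d) per-pixel descriptors (e.g. color + x, y, t);
    scribbles: (N,) array with 1 (foreground), 0 (background), NaN (unknown)."""
    n = features.shape[0]
    tree = cKDTree(features)                    # k-d tree over all frames
    dists, idx = tree.query(features, k=k + 1)  # +1: each point finds itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()                   # drop the self-match
    w = np.exp(-(dists[:, 1:].ravel() ** 2) / sigma ** 2)
    W = csr_matrix((w, (rows, cols)), shape=(n, n))
    W = 0.5 * (W + W.T)                         # symmetrize affinities
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
    known = ~np.isnan(scribbles)
    C = diags(known.astype(float))              # indicator of constrained pixels
    b = np.where(known, scribbles, 0.0)
    # Solve (L + lam * C) alpha = lam * b: smoothness on the nonlocal
    # graph, softly pinned to the scribble values.
    alpha = spsolve((L + lam * C).tocsc(), lam * b)
    return np.clip(alpha, 0.0, 1.0)
```

Because the neighborhood search runs over features from multiple frames at once, alpha values propagate both spatially and temporally through the same linear system, which is the mechanism behind the temporal coherence claimed above.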
Cite
Text
Choi et al. "Video Matting Using Multi-Frame Nonlocal Matting Laplacian." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33783-3_39
Markdown
[Choi et al. "Video Matting Using Multi-Frame Nonlocal Matting Laplacian." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/choi2012eccv-video/) doi:10.1007/978-3-642-33783-3_39
BibTeX
@inproceedings{choi2012eccv-video,
title = {{Video Matting Using Multi-Frame Nonlocal Matting Laplacian}},
author = {Choi, Inchang and Lee, Minhaeng and Tai, Yu-Wing},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {540--553},
doi = {10.1007/978-3-642-33783-3_39},
url = {https://mlanthology.org/eccv/2012/choi2012eccv-video/}
}