Motion-Aware KNN Laplacian for Video Matting

Abstract

This paper demonstrates how the nonlocal principle benefits video matting via the KNN Laplacian, which comes with a straightforward implementation using motion-aware K nearest neighbors. In hindsight, the fundamental problem to solve in video matting is to produce spatiotemporally coherent clusters of moving foreground pixels. When used as described, the motion-aware KNN Laplacian is effective in addressing this fundamental problem, as demonstrated by sparse user markups, typically on only one frame, in a variety of challenging examples featuring ambiguous foreground and background colors, changing topologies with disocclusion, significant illumination changes, fast motion, and motion blur. When working with existing Laplacian-based systems, our Laplacian is expected to benefit them immediately with improved clustering of moving foreground pixels.
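To make the core construction concrete, the sketch below shows how a KNN Laplacian can be assembled from per-pixel feature vectors. Note this is an illustrative assumption inspired by the abstract, not the paper's exact recipe: in a motion-aware variant, each feature row might concatenate color, spatial position, and motion cues such as optical flow, so that nearest neighbors cluster moving foreground pixels across frames.

```python
import numpy as np

def knn_laplacian(features, k=4):
    """Build a KNN affinity matrix W and its graph Laplacian L = D - W.

    features: (n, d) array with one feature vector per pixel. The feature
    design (e.g. concatenating RGB, (x, y) position, and optical-flow
    components for video) is a hypothetical choice for illustration.
    """
    n = features.shape[0]
    # Brute-force pairwise squared distances (fine for a small example;
    # a real system would use an approximate-NN structure like a k-d tree).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-matches
    idx = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbors per pixel

    W = np.zeros((n, n))
    C = d2[np.isfinite(d2)].max() + 1e-12  # normalizer so affinities lie in [0, 1]
    for i in range(n):
        for j in idx[i]:
            w = 1.0 - np.sqrt(d2[i, j] / C)
            W[i, j] = max(W[i, j], w)
    W = np.maximum(W, W.T)                 # symmetrize the affinity graph
    L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
    return L, W
```

With sparse user markups as hard constraints, the alpha matte would then be obtained by solving a regularized linear system involving L; that solve is omitted here for brevity.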

Cite

Text

Li et al. "Motion-Aware KNN Laplacian for Video Matting." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.447

Markdown

[Li et al. "Motion-Aware KNN Laplacian for Video Matting." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/li2013iccv-motionaware/) doi:10.1109/ICCV.2013.447

BibTeX

@inproceedings{li2013iccv-motionaware,
  title     = {{Motion-Aware KNN Laplacian for Video Matting}},
  author    = {Li, Dingzeyu and Chen, Qifeng and Tang, Chi-Keung},
  booktitle = {International Conference on Computer Vision},
  year      = {2013},
  doi       = {10.1109/ICCV.2013.447},
  url       = {https://mlanthology.org/iccv/2013/li2013iccv-motionaware/}
}