Learning Layered Motion Segmentation of Video

Abstract

We present an unsupervised approach for learning a generative layered representation of a scene from a video for motion segmentation. The learnt model is a composition of layers, which consist of one or more segments. Included in the model are the effects of image projection, lighting, and motion blur. The two main contributions of our method are: (i) a novel algorithm for obtaining the initial estimate of the model using efficient loopy belief propagation; and (ii) using αβ-swap and α-expansion algorithms, which guarantee a strong local minimum, for refining the initial estimate. Results are presented on several classes of objects with different types of camera motion. We compare our method with the state of the art and demonstrate significant improvements.
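The initial-estimate step relies on loopy belief propagation over a grid-structured MRF. The following is a minimal sketch (not the authors' implementation) of min-sum loopy BP for MAP labeling on a toy 2×2 pixel grid with two motion layers and a Potts smoothness cost; the grid, unary costs, and smoothness weight are illustrative assumptions, and the result is checked against brute-force enumeration.

```python
# Hypothetical toy example: min-sum loopy belief propagation for MAP
# labeling on a 2x2 grid MRF with a Potts pairwise cost. All energies
# and names here are illustrative, not from the paper.
import itertools

NODES = [(0, 0), (0, 1), (1, 0), (1, 1)]        # 2x2 pixel grid
EDGES = [((0, 0), (0, 1)), ((0, 0), (1, 0)),
         ((0, 1), (1, 1)), ((1, 0), (1, 1))]
LABELS = [0, 1]                                  # two motion layers
LAMBDA = 0.5                                     # Potts smoothness weight

# Toy unary costs: cost of assigning each label to each pixel.
UNARY = {(0, 0): [0.1, 1.0], (0, 1): [0.2, 0.9],
         (1, 0): [0.8, 0.3], (1, 1): [1.0, 0.1]}

def pairwise(a, b):
    return 0.0 if a == b else LAMBDA             # Potts model

def neighbours(p):
    return [q for (u, v) in EDGES
            for q in ([v] if u == p else [u] if v == p else [])]

def loopy_bp(iters=20):
    # msgs[(p, q)][l] = min-sum message from p to q about label l at q.
    msgs = {(p, q): [0.0, 0.0]
            for (u, v) in EDGES for (p, q) in [(u, v), (v, u)]}
    for _ in range(iters):
        new = {}
        for (p, q) in msgs:                      # synchronous update
            incoming = [msgs[(r, p)] for r in neighbours(p) if r != q]
            new[(p, q)] = [
                min(UNARY[p][lp] + pairwise(lp, lq) +
                    sum(m[lp] for m in incoming) for lp in LABELS)
                for lq in LABELS]
            lo = min(new[(p, q)])                # normalise for stability
            new[(p, q)] = [v - lo for v in new[(p, q)]]
        msgs = new
    # Belief = unary + all incoming messages; pick the min-cost label.
    return {p: min(LABELS, key=lambda l: UNARY[p][l] +
                   sum(msgs[(r, p)][l] for r in neighbours(p)))
            for p in NODES}

def brute_force():
    def energy(assign):
        return (sum(UNARY[p][assign[p]] for p in NODES) +
                sum(pairwise(assign[u], assign[v]) for (u, v) in EDGES))
    return min((dict(zip(NODES, labs))
                for labs in itertools.product(LABELS, repeat=len(NODES))),
               key=energy)

print(loopy_bp())     # {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}
print(brute_force())  # same MAP labeling on this toy problem
```

On this single-loop graph with a unique MAP solution, the messages reach a fixed point after a few iterations and the beliefs recover the exact labeling; the paper additionally refines such an estimate with αβ-swap and α-expansion graph-cut moves.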

Cite

Text

Kumar et al. "Learning Layered Motion Segmentation of Video." IEEE/CVF International Conference on Computer Vision, 2005. doi:10.1109/ICCV.2005.138

Markdown

[Kumar et al. "Learning Layered Motion Segmentation of Video." IEEE/CVF International Conference on Computer Vision, 2005.](https://mlanthology.org/iccv/2005/kumar2005iccv-learning/) doi:10.1109/ICCV.2005.138

BibTeX

@inproceedings{kumar2005iccv-learning,
  title     = {{Learning Layered Motion Segmentation of Video}},
  author    = {Kumar, M. Pawan and Torr, Philip H. S. and Zisserman, Andrew},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2005},
  pages     = {33--40},
  doi       = {10.1109/ICCV.2005.138},
  url       = {https://mlanthology.org/iccv/2005/kumar2005iccv-learning/}
}