Video Segmentation Based on Graphical Models
Abstract
This paper proposes a unified framework for spatiotemporal segmentation of video sequences. A Bayesian network is presented to model the interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notions of distance transformation and Markov random field are used to express spatiotemporal constraints. Given consecutive frames, an iterative optimization method is proposed to maximize the conditional probability density of the three fields. Experimental results show that the approach is robust and generates spatiotemporally coherent segmentation results.
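The iterative maximization of the three coupled fields described above is in the spirit of coordinate-wise MAP estimation under a Markov random field prior. As a hedged illustration only (not the authors' algorithm, which couples motion, intensity, and segmentation fields), the sketch below shows the core idea on a toy 1D binary labeling problem: each site is updated in turn to the label that minimizes a local energy combining a data term and an MRF smoothness term, in the style of iterated conditional modes. All function and parameter names here are hypothetical.

```python
def icm_segment(obs, beta=0.6, iters=10):
    """Toy iterated-conditional-modes labeling of 1D observations.

    obs   : list of floats in [0, 1] (noisy evidence for label 1)
    beta  : MRF smoothness weight penalizing neighboring label disagreement
    iters : maximum number of full sweeps over the sites
    """
    # Initialize labels by thresholding the observations.
    labels = [1 if o > 0.5 else 0 for o in obs]
    n = len(labels)
    for _ in range(iters):
        changed = False
        for i in range(n):
            best_label, best_energy = labels[i], float("inf")
            for lab in (0, 1):
                # Data term: squared deviation of the label from the evidence.
                energy = (obs[i] - lab) ** 2
                # Smoothness term: Potts-style penalty per disagreeing neighbor.
                if i > 0:
                    energy += beta * (lab != labels[i - 1])
                if i < n - 1:
                    energy += beta * (lab != labels[i + 1])
                if energy < best_energy:
                    best_energy, best_label = energy, lab
            if best_label != labels[i]:
                labels[i] = best_label
                changed = True
        if not changed:  # converged: no site changed in a full sweep
            break
    return labels

# The smoothness term pulls the isolated weak evidence at index 2 toward
# its confident neighbors, yielding a coherent segmentation.
print(icm_segment([0.1, 0.2, 0.6, 0.9, 0.8, 0.2]))  # → [0, 0, 1, 1, 1, 0]
```

In the paper's setting the analogous updates alternate over the motion, intensity segmentation, and video segmentation fields, each conditioned on the current estimates of the others.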
Cite
Text
Wang et al. "Video Segmentation Based on Graphical Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003. doi:10.1109/CVPR.2003.1211488
Markdown
[Wang et al. "Video Segmentation Based on Graphical Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003.](https://mlanthology.org/cvpr/2003/wang2003cvpr-video/) doi:10.1109/CVPR.2003.1211488
BibTeX
@inproceedings{wang2003cvpr-video,
title = {{Video Segmentation Based on Graphical Models}},
author = {Wang, Yang and Tan, Tele and Loe, Kia-Fock},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2003},
pages = {335--342},
doi = {10.1109/CVPR.2003.1211488},
url = {https://mlanthology.org/cvpr/2003/wang2003cvpr-video/}
}