An Iterative Optimization Approach for Unified Image Segmentation and Matting

Abstract

Separating a foreground object from the background in a static image requires determining both full and partial pixel coverages, a process known as extracting a matte. Previous approaches require the input image to be presegmented into three regions (foreground, background, and unknown), known as a trimap. Partial opacity values are then computed only for pixels inside the unknown region. This presegmentation-based approach fails for images with large portions of semitransparent foreground, where the trimap is difficult to create even manually. In this paper, we combine the segmentation and matting problems and propose a unified optimization approach based on belief propagation. We iteratively estimate the opacity value for every pixel in the image, based on a small sample of foreground and background pixels marked by the user. Experimental results show that, compared with previous approaches, our method extracts high-quality mattes more efficiently for foregrounds with significant semitransparent regions.
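
The abstract refers to estimating a per-pixel opacity (alpha) value from user-marked foreground and background samples. The underlying model in matting is the standard compositing equation C = αF + (1 − α)B. As background, here is a minimal sketch of a per-pixel least-squares alpha estimate given one foreground/background color sample pair; this illustrates the compositing relationship only, not the paper's belief-propagation optimization, and the function name is illustrative:

```python
import numpy as np

def estimate_alpha(c, f, b):
    """Estimate a pixel's opacity from the compositing equation
    C = alpha * F + (1 - alpha) * B by least-squares projection.

    c: observed RGB color of the pixel (length-3 array)
    f: a sampled foreground color, b: a sampled background color
    """
    fb = f - b
    denom = float(fb @ fb)
    if denom < 1e-12:  # degenerate case: foreground and background samples coincide
        return 0.0
    alpha = float((c - b) @ fb) / denom
    return min(max(alpha, 0.0), 1.0)  # clamp to the valid opacity range [0, 1]
```

A pixel whose color lies halfway between the foreground and background samples receives alpha 0.5; the paper's contribution is to estimate such values jointly and iteratively for every pixel rather than independently per pixel.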

Cite

Text

Wang and Cohen. "An Iterative Optimization Approach for Unified Image Segmentation and Matting." IEEE International Conference on Computer Vision, 2005. doi:10.1109/ICCV.2005.37

Markdown

[Wang and Cohen. "An Iterative Optimization Approach for Unified Image Segmentation and Matting." IEEE International Conference on Computer Vision, 2005.](https://mlanthology.org/iccv/2005/wang2005iccv-iterative/) doi:10.1109/ICCV.2005.37

BibTeX

@inproceedings{wang2005iccv-iterative,
  title     = {{An Iterative Optimization Approach for Unified Image Segmentation and Matting}},
  author    = {Wang, Jue and Cohen, Michael F.},
  booktitle = {IEEE International Conference on Computer Vision},
  year      = {2005},
  pages     = {936-943},
  doi       = {10.1109/ICCV.2005.37},
  url       = {https://mlanthology.org/iccv/2005/wang2005iccv-iterative/}
}