GrabCut in One Cut

Abstract

Among image segmentation algorithms there are two major groups: (a) methods assuming known appearance models and (b) methods estimating appearance models jointly with segmentation. Typically, the first group optimizes appearance log-likelihoods in combination with some spatial regularization. This problem is relatively simple and many methods guarantee globally optimal results. The second group treats model parameters as additional variables, transforming simple segmentation energies into high-order NP-hard functionals (Zhu-Yuille, Chan-Vese, GrabCut, etc). It is known that such methods indirectly minimize the appearance overlap between the segments. We propose a new energy term explicitly measuring the L1 distance between the object and background appearance models that can be globally maximized in one graph cut. We show that in many applications our simple term makes NP-hard segmentation functionals unnecessary. Our one cut algorithm effectively replaces approximate iterative optimization techniques based on block coordinate descent.
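As a rough illustration of the appearance term the abstract refers to, the sketch below builds unnormalized color-bin histograms for the object and background sides of a binary mask and evaluates both the L1 distance between them and the bin-wise overlap. The bin quantization, array shapes, and function names are illustrative assumptions rather than the paper's construction, and the one-cut graph optimization itself is not shown; the sketch only makes the link between maximizing L1 separation and minimizing appearance overlap concrete.

```python
import numpy as np

def bin_histograms(bins, mask):
    """Per-bin pixel counts inside (object) and outside (background) a binary mask.

    bins : integer array with one quantized color-bin index per pixel (assumed input)
    mask : boolean array of the same shape, True = object
    """
    k = bins.max() + 1
    n_obj = np.bincount(bins[mask], minlength=k)
    n_bkg = np.bincount(bins[~mask], minlength=k)
    return n_obj, n_bkg

def appearance_terms(bins, mask):
    n_obj, n_bkg = bin_histograms(bins, mask)
    l1_distance = np.abs(n_obj - n_bkg).sum()     # L1 separation between the two histograms
    overlap = np.minimum(n_obj, n_bkg).sum()      # bin-wise appearance overlap
    # With unnormalized counts the two quantities are linked by
    #   ||n_obj - n_bkg||_1 = N - 2 * overlap,   N = total number of pixels,
    # so maximizing the L1 distance is the same as minimizing the overlap.
    assert l1_distance == bins.size - 2 * overlap
    return l1_distance, overlap

# Toy usage: quantize a random "image" into 16 bins and score a box-shaped mask.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
bins = (image[..., 0] * 16).astype(int).clip(0, 15)  # crude single-channel quantization
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(appearance_terms(bins, mask))
```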

Cite

Text

Tang et al. "GrabCut in One Cut." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.222

Markdown

[Tang et al. "GrabCut in One Cut." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/tang2013iccv-grabcut/) doi:10.1109/ICCV.2013.222

BibTeX

@inproceedings{tang2013iccv-grabcut,
  title     = {{GrabCut in One Cut}},
  author    = {Tang, Meng and Gorelick, Lena and Veksler, Olga and Boykov, Yuri},
  booktitle = {International Conference on Computer Vision},
  year      = {2013},
  doi       = {10.1109/ICCV.2013.222},
  url       = {https://mlanthology.org/iccv/2013/tang2013iccv-grabcut/}
}