Learning a Classification Model for Segmentation
Abstract
We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is oversegmented into superpixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images.
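The core idea of the abstract — score candidate groupings with a linear classifier trained on human-labeled positives and randomly matched negatives — can be illustrated with a minimal sketch. Everything below is hypothetical: the four cue features, the synthetic data, and the use of logistic regression as the linear model are stand-ins for the paper's actual features and training procedure.

```python
# Hedged sketch: a linear classifier over Gestalt-style grouping cues.
# The feature names, data distributions, and training method are illustrative,
# not the paper's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Four illustrative cue features per candidate segment:
# [contour energy, texture similarity, brightness similarity, good continuation]
n = 200
pos = rng.normal(loc=[0.8, 0.7, 0.7, 0.6], scale=0.15, size=(n, 4))  # human-labeled groups
neg = rng.normal(loc=[0.3, 0.3, 0.4, 0.2], scale=0.15, size=(n, 4))  # random matchings
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression by gradient descent (a simple linear combination of cues).
w = np.zeros(4)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "good group"
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= 0.5 * (p - y).mean()                 # gradient step on bias

scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((scores > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A segmentation search procedure, like the random search mentioned in the abstract, could then rank candidate segmentations by the classifier's score.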
Cite
Text
Ren and Malik. "Learning a Classification Model for Segmentation." IEEE/CVF International Conference on Computer Vision, 2003. doi:10.1109/ICCV.2003.1238308
Markdown
[Ren and Malik. "Learning a Classification Model for Segmentation." IEEE/CVF International Conference on Computer Vision, 2003.](https://mlanthology.org/iccv/2003/ren2003iccv-learning/) doi:10.1109/ICCV.2003.1238308
BibTeX
@inproceedings{ren2003iccv-learning,
title = {{Learning a Classification Model for Segmentation}},
author = {Ren, Xiaofeng and Malik, Jitendra},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2003},
pages = {10--17},
doi = {10.1109/ICCV.2003.1238308},
url = {https://mlanthology.org/iccv/2003/ren2003iccv-learning/}
}