Relating Things and Stuff by High-Order Potential Modeling
Abstract
In the last few years, substantially different approaches have been adopted for segmenting and detecting “things” (object categories that have a well-defined shape, such as people and cars) and “stuff” (object categories that have an amorphous spatial extent, such as grass and sky). This paper proposes a framework for scene understanding that relates both things and stuff by using a novel way of modeling high-order potentials. This representation allows us to enforce labelling consistency between hypotheses of detected objects (things) and image segments (stuff) in a single graphical model. We show that an efficient graph-cut algorithm can be used to perform maximum a posteriori (MAP) inference in this model. We evaluate our method on the Stanford dataset [1] by comparing it against state-of-the-art methods for object segmentation and detection.
Cite
Text
Kim et al. "Relating Things and Stuff by High-Order Potential Modeling." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33885-4_30
Markdown
[Kim et al. "Relating Things and Stuff by High-Order Potential Modeling." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/kim2012eccv-relating/) doi:10.1007/978-3-642-33885-4_30
BibTeX
@inproceedings{kim2012eccv-relating,
title = {{Relating Things and Stuff by High-Order Potential Modeling}},
author = {Kim, Byung-soo and Sun, Min and Kohli, Pushmeet and Savarese, Silvio},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {293--304},
doi = {10.1007/978-3-642-33885-4_30},
url = {https://mlanthology.org/eccv/2012/kim2012eccv-relating/}
}