A Novel Topic-Level Random Walk Framework for Scene Image Co-Segmentation
Abstract
Image co-segmentation is popular for its ability to bypass the need for supervisory data by exploiting the common information shared across multiple images. In this paper, we address a more challenging branch called scene image co-segmentation, which jointly segments multiple images captured from the same scene into regions corresponding to their respective classes. We first put forward a novel representation named Visual Relation Network (VRN) to organize multiple segments, and then search for meaningful segments in every image through voting on the network. A scalable topic-level random walk is then used to solve the voting problem. Experiments on the benchmark MSRC-v2 dataset and the more difficult LabelMe and SUN datasets show its superiority over state-of-the-art methods.
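The abstract's voting step can be illustrated with a minimal sketch: a random walk with restart over a small segment-affinity graph, scoring segments by their stationary visiting probability. This is only an illustration of the general random-walk-with-restart idea, not the paper's actual algorithm; the affinity values, the restart prior, and the function name are all hypothetical, and the paper's VRN construction and topic modelling are more involved.

```python
# Illustrative sketch only (not the paper's exact method): rank segments
# by a random walk with restart on a hand-made affinity graph.
# All numbers and names below are hypothetical.

def random_walk_scores(affinity, restart, alpha=0.85, iters=100):
    """Power-iterate p <- alpha * W^T p + (1 - alpha) * restart,
    where W is the row-normalized affinity matrix."""
    n = len(affinity)
    # Row-normalize affinities into transition probabilities.
    W = []
    for row in affinity:
        s = sum(row)
        W.append([w / s if s else 1.0 / n for w in row])
    # Start from the uniform distribution over segments.
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [
            alpha * sum(W[i][j] * p[i] for i in range(n))
            + (1 - alpha) * restart[j]
            for j in range(n)
        ]
    return p

# Three segments; segments 0 and 1 are strongly related (e.g. same topic).
affinity = [
    [0.0, 1.0, 0.1],
    [1.0, 0.0, 0.1],
    [0.1, 0.1, 0.0],
]
restart = [0.5, 0.5, 0.0]  # topic-level prior favouring segments 0 and 1
scores = random_walk_scores(affinity, restart)
```

Segments favoured both by the graph structure and by the topic prior end up with the highest scores, which is the intuition behind voting for "meaningful" segments on the network.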
Cite
Text
Yuan et al. "A Novel Topic-Level Random Walk Framework for Scene Image Co-Segmentation." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10590-1_45

BibTeX
@inproceedings{yuan2014eccv-novel,
title = {{A Novel Topic-Level Random Walk Framework for Scene Image Co-Segmentation}},
author = {Yuan, Ze-Huan and Lu, Tong and Shivakumara, Palaiahnakote},
booktitle = {European Conference on Computer Vision},
year = {2014},
pages = {695--709},
doi = {10.1007/978-3-319-10590-1_45},
url = {https://mlanthology.org/eccv/2014/yuan2014eccv-novel/}
}