Multiview Co-Segmentation for Wide Baseline Images Using Cross-View Supervision
Abstract
This paper presents a method to co-segment an object from wide baseline multiview images using cross-view self-supervision. A key challenge in wide baseline images lies in the fragility of photometric matching. Inspired by shape-from-silhouette, which does not require photometric matching, we formulate a new theory of shape belief transfer---the segmentation belief in one image can be used to predict that of the other image through epipolar geometry. This formulation is differentiable, and therefore, end-to-end training is possible. We analyze the shape belief transfer to identify the theoretical upper and lower bounds of the unlabeled data segmentation, which characterizes the degenerate cases of co-segmentation. We design a novel triple network that embeds this shape belief transfer, which is agnostic to visual appearance and baseline. The resulting network is validated by recognizing a target object from real-world visual data, including non-human species and a subject of interest in social videos, where attaining large-scale annotated data is challenging.
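To make the cross-view idea concrete, below is a minimal sketch (not the paper's exact formulation) of one natural reading of shape belief transfer: under the convention that the fundamental matrix F satisfies x2ᵀ F x1 = 0, a pixel's transferred belief in view 2 is upper-bounded by the maximum segmentation belief along its epipolar line in view 1, in the spirit of silhouette consistency. The function name `transfer_belief` and the per-pixel line-sampling scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transfer_belief(belief1, F, shape2, n_samples=64):
    """Illustrative upper-bound belief transfer (assumed formulation).

    For each pixel x2 in view 2, its epipolar line in view 1 is
    l1 = F^T x2 (convention: x2^T F x1 = 0); the transferred belief is
    the max of view-1 belief sampled along that line.
    """
    H1, W1 = belief1.shape
    H2, W2 = shape2
    out = np.zeros((H2, W2))
    xs = np.linspace(0, W1 - 1, n_samples)  # sample x-coords in view 1
    for v in range(H2):
        for u in range(W2):
            a, b, c = F.T @ np.array([u, v, 1.0])  # line a*x + b*y + c = 0
            if abs(b) < 1e-9:
                continue  # near-vertical epipolar line; skipped for brevity
            ys = -(a * xs + c) / b
            valid = (ys >= 0) & (ys <= H1 - 1)
            if valid.any():
                out[v, u] = belief1[ys[valid].round().astype(int),
                                    xs[valid].round().astype(int)].max()
    return out

# Toy rectified pair: this F makes epipolar lines horizontal (y1 = y2).
F = np.array([[0., 0.,  0.],
              [0., 0., -1.],
              [0., 1.,  0.]])
belief1 = np.zeros((4, 4))
belief1[1, 2] = 0.9  # one confident foreground pixel in view 1
out = transfer_belief(belief1, F, (4, 4))
```

In this rectified toy case, every pixel on row 1 of view 2 receives the 0.9 belief (the max along its horizontal epipolar line), while other rows stay at 0; since each step is a max/interpolation over beliefs, a soft version of this operation is differentiable, matching the abstract's claim that end-to-end training is possible.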
Cite
Yao and Park. "Multiview Co-Segmentation for Wide Baseline Images Using Cross-View Supervision." Winter Conference on Applications of Computer Vision, 2020.
https://mlanthology.org/wacv/2020/yao2020wacv-multiview/
BibTeX
@inproceedings{yao2020wacv-multiview,
title = {{Multiview Co-Segmentation for Wide Baseline Images Using Cross-View Supervision}},
author = {Yao, Yuan and Park, Hyun Soo},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://mlanthology.org/wacv/2020/yao2020wacv-multiview/}
}