Generic Model Abstraction from Examples
Abstract
The recognition community has long avoided bridging the representational gap between traditional, low-level image features and generic models. Instead, the gap has been artificially eliminated by either bringing the image closer to the models, using simple scenes containing idealized, textureless objects, or by bringing the models closer to the images, using 3-D CAD model templates or 2-D appearance model templates. In this paper, we attempt to bridge the representational gap for the domain of model acquisition. Specifically, we address the problem of automatically acquiring a generic 2-D view-based class model from a set of images, each containing an exemplar object belonging to that class. We introduce a novel graph-theoretical formulation of the problem, and demonstrate the approach on real imagery.
Cite
Text
Keselman and Dickinson. "Generic Model Abstraction from Examples." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001. doi:10.1109/CVPR.2001.990574

Markdown
[Keselman and Dickinson. "Generic Model Abstraction from Examples." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001.](https://mlanthology.org/cvpr/2001/keselman2001cvpr-generic/) doi:10.1109/CVPR.2001.990574

BibTeX
@inproceedings{keselman2001cvpr-generic,
title = {{Generic Model Abstraction from Examples}},
author = {Keselman, Yakov and Dickinson, Sven J.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2001},
pages = {I:856-863},
doi = {10.1109/CVPR.2001.990574},
url = {https://mlanthology.org/cvpr/2001/keselman2001cvpr-generic/}
}