A Shape-Based Object Class Model for Knowledge Transfer
Abstract
Object class models trained on hundreds or thousands of images have been shown to enable robust detection. Transferring knowledge from such models to new object classes trained on a few, or even a single, training instance is, however, still in its infancy. This paper designs a shape-based model that allows knowledge to be transferred easily and explicitly on three different levels: transfer of individual parts' shape and appearance information, transfer of local symmetry between parts, and transfer of part topology. Due to the factorized form of the model, knowledge can be transferred either for the complete model or only partially, corresponding to certain aspects of the model. The experiments clearly demonstrate that the proposed model is competitive with the state of the art and enables both full and partial knowledge transfer.
Cite
Text
Stark et al. "A Shape-Based Object Class Model for Knowledge Transfer." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459231
Markdown
[Stark et al. "A Shape-Based Object Class Model for Knowledge Transfer." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/stark2009iccv-shape/) doi:10.1109/ICCV.2009.5459231
BibTeX
@inproceedings{stark2009iccv-shape,
title = {{A Shape-Based Object Class Model for Knowledge Transfer}},
author = {Stark, Michael and Goesele, Michael and Schiele, Bernt},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2009},
pages = {373--380},
doi = {10.1109/ICCV.2009.5459231},
url = {https://mlanthology.org/iccv/2009/stark2009iccv-shape/}
}