Automatic Labeling of Data for Transfer Learning
Abstract
Transfer learning uses trained weights from a source model as the initial weights for training on a target dataset. A well-chosen source with a large amount of labeled data leads to a significant improvement in accuracy. We demonstrate a technique that automatically labels large unlabeled datasets so that they can train source models for transfer learning. We experimentally evaluate this method, using a baseline dataset of human-annotated ImageNet1K labels, against five variations of this technique. We show that the performance of these automatically trained models comes within 6% of the baseline.
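The transfer step the abstract describes, copying a source model's trained weights into a target model before fine-tuning, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function and layer names (`transfer_init`, `conv1`, `head`) are hypothetical, and weights are flat lists for brevity.

```python
import random

def transfer_init(source_weights, target_layers, seed=0):
    """Initialize a target model from source weights: layers that exist in the
    source are copied over; layers new to the target (e.g. a task-specific
    classification head) are randomly initialized."""
    rng = random.Random(seed)
    target_weights = {}
    for name, size in target_layers.items():
        if name in source_weights:
            # Reuse the pretrained parameters as the starting point.
            target_weights[name] = list(source_weights[name])
        else:
            # Fresh small random init for layers absent from the source model.
            target_weights[name] = [rng.gauss(0.0, 0.01) for _ in range(size)]
    return target_weights

# Source model trained on a large (possibly automatically labeled) dataset.
source = {"conv1": [0.5, -0.2], "conv2": [0.1, 0.3]}

# Target model shares the backbone but adds a new 3-way classifier head.
target = transfer_init(source, {"conv1": 2, "conv2": 2, "head": 3})
```

Fine-tuning would then proceed from `target` on the target dataset, typically with a smaller learning rate for the copied backbone layers than for the new head.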
Cite

Text

Dube et al. "Automatic Labeling of Data for Transfer Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Dube et al. "Automatic Labeling of Data for Transfer Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/dube2019cvprw-automatic/)

BibTeX
@inproceedings{dube2019cvprw-automatic,
title = {{Automatic Labeling of Data for Transfer Learning}},
author = {Dube, Parijat and Bhattacharjee, Bishwaranjan and Huo, Siyu and Watson, Patrick and Belgodere, Brian M.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {122--129},
url = {https://mlanthology.org/cvprw/2019/dube2019cvprw-automatic/}
}