Self-Training for Few-Shot Transfer Across Extreme Task Differences
Abstract
Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains.
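The approach described in the abstract adapts a source-pretrained representation by distilling its own predictions on unlabeled target-domain images. The sketch below illustrates only that pseudo-labeling/distillation step; it is an assumption-laden illustration, not the authors' released implementation. The ResNet-18 backbone, temperature, optimizer settings, and the random tensors standing in for unlabeled target images are placeholders, and the full method additionally keeps a supervised loss on the labeled source data and a self-supervised loss on the target data (PyTorch/torchvision assumed; the weights API below requires torchvision 0.13 or newer).

# Minimal sketch: self-training a source-pretrained representation on
# unlabeled target-domain data via pseudo-label distillation.
# All hyperparameters and data below are illustrative placeholders.
import torch
import torch.nn.functional as F
from torch import optim
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Teacher: representation pre-trained on the labeled source domain (e.g. ImageNet).
teacher = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).to(device).eval()

# Student: same architecture, initialized from the teacher, adapted to the target domain.
student = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).to(device)

optimizer = optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
temperature = 1.0  # softening temperature for pseudo-labels (assumed value)

# Stand-in for a batch of unlabeled target-domain images (e.g. X-ray, satellite).
unlabeled_target = torch.randn(32, 3, 224, 224)

for step in range(10):
    images = unlabeled_target.to(device)

    # 1) Teacher produces soft pseudo-labels on the unlabeled target images.
    with torch.no_grad():
        pseudo = F.softmax(teacher(images) / temperature, dim=1)

    # 2) Student is trained to match the pseudo-labels (the self-training step).
    log_probs = F.log_softmax(student(images) / temperature, dim=1)
    loss = F.kl_div(log_probs, pseudo, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After self-training, the student backbone serves as the feature extractor
# for few-shot evaluation on the target tasks.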
Cite
Text
Phoo and Hariharan. "Self-Training for Few-Shot Transfer Across Extreme Task Differences." International Conference on Learning Representations, 2021.
Markdown
[Phoo and Hariharan. "Self-Training for Few-Shot Transfer Across Extreme Task Differences." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/phoo2021iclr-selftraining/)
BibTeX
@inproceedings{phoo2021iclr-selftraining,
title = {{Self-Training for Few-Shot Transfer Across Extreme Task Differences}},
author = {Phoo, Cheng Perng and Hariharan, Bharath},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/phoo2021iclr-selftraining/}
}