Transferable Meta Learning Across Domains
Abstract
Meta learning algorithms are effective at obtaining meta models capable of solving new tasks quickly. However, they critically require sufficient tasks for meta model training, and the resulting model can only solve new tasks similar to the training ones. These limitations cause performance to decline when training tasks in the target domain are insufficient and when tasks are heterogeneous, i.e., the source (model training) tasks exhibit different characteristics from the target (model application) tasks. To overcome these two significant limitations of existing meta learning algorithms, we introduce the cross-domain meta learning framework and propose a new transferable meta learning (TML) algorithm. TML performs meta task adaptation jointly with meta model learning, which effectively narrows the divergence between source and target tasks and enables transferring source meta-knowledge to solve target tasks. The resulting transferable meta model can thus solve new learning tasks in new domains quickly. We apply the proposed TML to cross-domain few-shot classification problems and evaluate its performance on multiple benchmarks. It performs significantly better and faster than well-established meta learning algorithms and fine-tuned domain-adapted models.
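The abstract's core idea — learning a meta initialization while jointly penalizing source–target divergence — can be sketched in a toy form. This is a minimal illustration, not the paper's actual TML method: the linear-regression tasks, the mean-prediction divergence proxy, the finite-difference meta-gradient, and all hyperparameters below are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_base = 2.0 * rng.normal(size=d)  # shared task structure (toy assumption)

def sample_task(shift=0.0, n=20):
    """Toy linear-regression task; a nonzero input shift mimics a target
    domain whose inputs differ from the source domain."""
    w_true = w_base + 0.1 * rng.normal(size=d)
    def draw():
        X = rng.normal(size=(n, d)) + shift
        return X, X @ w_true + 0.1 * rng.normal(size=n)
    return draw(), draw()  # (support set, query set)

def inner_adapt(w, X, y, lr=0.1):
    """One MAML-style inner gradient step (squared error, linear model)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def task_loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def objective(w, Xs, ys, Xq, yq, Xt, lam=0.01):
    """Query loss after adaptation, plus a crude divergence proxy: the
    squared gap between mean predictions on source vs. target inputs.
    (A stand-in for TML's task-adaptation term, not its real objective.)"""
    w_a = inner_adapt(w, Xs, ys)
    div = (np.mean(Xs @ w_a) - np.mean(Xt @ w_a)) ** 2
    return task_loss(w_a, Xq, yq) + lam * div

w_meta, meta_lr, eps = np.zeros(d), 0.01, 1e-4
for step in range(200):
    (Xs, ys), (Xq, yq) = sample_task()       # labeled source task
    (Xt, _), _ = sample_task(shift=1.0)      # unlabeled target inputs
    # Finite-difference meta-gradient keeps the sketch dependency-free.
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        g[i] = (objective(w_meta + e, Xs, ys, Xq, yq, Xt)
                - objective(w_meta - e, Xs, ys, Xq, yq, Xt)) / (2 * eps)
    w_meta -= meta_lr * g

# One adaptation step from the meta init vs. one step from scratch.
(Xs, ys), (Xq, yq) = sample_task()
loss_meta = task_loss(inner_adapt(w_meta, Xs, ys), Xq, yq)
loss_zero = task_loss(inner_adapt(np.zeros(d), Xs, ys), Xq, yq)
print(loss_meta, loss_zero)
```

The divergence term here is only a placeholder for whatever alignment objective narrows the source–target gap; the point of the sketch is the joint optimization of adaptation quality and cross-domain divergence that the abstract describes.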
Cite
Kang and Feng. "Transferable Meta Learning Across Domains." Conference on Uncertainty in Artificial Intelligence, 2018.

BibTeX
@inproceedings{kang2018uai-transferable,
title = {{Transferable Meta Learning Across Domains}},
author = {Kang, Bingyi and Feng, Jiashi},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2018},
pages = {177--187},
url = {https://mlanthology.org/uai/2018/kang2018uai-transferable/}
}