Meta-Learning Under Task Shift
Abstract
A common assumption in meta-learning is that meta-training and meta-test tasks are drawn from the same distribution. In practice, however, this assumption is often violated. Under such task shift, standard meta-learning algorithms perform poorly because their estimate of the meta-test error is no longer unbiased. In this paper, we propose a new meta-learning method called Importance Weighted Meta-Learning (IWML), which preserves unbiasedness even under task shift. Our approach uses labeled meta-training datasets together with unlabeled datasets from tasks drawn from the meta-test task distribution to assign a weight to each meta-training task, where the weight is the ratio of the meta-test and meta-training task densities. This enables the model to focus on meta-training tasks that closely resemble meta-test tasks during meta-training. We meta-learn neural network-based models by minimizing the expected weighted meta-training error, which is an unbiased estimator of the expected error over meta-test tasks. The task density ratio is estimated with kernel density estimation, where the distance between tasks is measured by the maximum mean discrepancy. Our empirical evaluation on few-shot classification datasets demonstrates a significant improvement of IWML over existing approaches.
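The unbiasedness claim rests on standard importance weighting over tasks. Writing $p_{te}$ and $p_{tr}$ for the meta-test and meta-training task densities and $\mathcal{L}(t)$ for the error on task $t$ (our own shorthand, not notation from the paper), and assuming $p_{tr}(t) > 0$ wherever $p_{te}(t) > 0$, the expected meta-test error can be rewritten as an expectation over meta-training tasks:

$$
\mathbb{E}_{t \sim p_{te}}[\mathcal{L}(t)]
= \int \frac{p_{te}(t)}{p_{tr}(t)}\,\mathcal{L}(t)\,p_{tr}(t)\,dt
= \mathbb{E}_{t \sim p_{tr}}\!\left[\frac{p_{te}(t)}{p_{tr}(t)}\,\mathcal{L}(t)\right],
$$

so minimizing the weighted meta-training error targets the expected meta-test error.

The sketch below illustrates, under our own assumptions rather than the authors' implementation, how the task weights described in the abstract could be computed: tasks are compared via the maximum mean discrepancy (MMD), the meta-training and meta-test task densities are approximated by kernel density estimation over those MMD distances, and each meta-training task is weighted by the estimated density ratio. Function names such as `mmd2` and `task_weights`, the bandwidth values, and the toy data are all illustrative.

```python
# Minimal sketch (not the authors' code) of MMD-based task weighting:
# weight(task) = p_test(task) / p_train(task), both densities estimated by KDE
# with an MMD-based kernel between tasks.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared MMD between two samples; used as a distance between tasks
    represented by their (unlabeled) feature sets."""
    kxx = rbf_kernel(X, X, gamma).mean()
    kyy = rbf_kernel(Y, Y, gamma).mean()
    kxy = rbf_kernel(X, Y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

def kde_task_density(task, reference_tasks, bandwidth=1.0, gamma=1.0):
    """Kernel density estimate of a task distribution, using an MMD-based
    kernel between tasks instead of a Euclidean kernel between vectors."""
    dists = np.array([mmd2(task, ref, gamma) for ref in reference_tasks])
    return np.exp(-dists / (2.0 * bandwidth**2)).mean()

def task_weights(train_tasks, test_tasks, bandwidth=1.0, gamma=1.0, eps=1e-12):
    """Importance weight for each meta-training task: estimated meta-test task
    density divided by estimated meta-training task density."""
    weights = []
    for task in train_tasks:
        p_test = kde_task_density(task, test_tasks, bandwidth, gamma)
        p_train = kde_task_density(task, train_tasks, bandwidth, gamma)
        weights.append(p_test / max(p_train, eps))
    return np.array(weights)

# Toy usage: each task is an (n_i, d) array of unlabeled features.
rng = np.random.default_rng(0)
train_tasks = [rng.normal(0.0, 1.0, size=(20, 5)) for _ in range(8)]
test_tasks = [rng.normal(0.5, 1.0, size=(20, 5)) for _ in range(4)]
w = task_weights(train_tasks, test_tasks)
print(np.round(w / w.sum(), 3))
```

With such weights in hand, meta-training would proceed as usual, with each meta-training task's loss scaled by its weight so that tasks resembling the meta-test distribution contribute more to the model update.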
Cite
Text
Sun et al. "Meta-Learning Under Task Shift." Transactions on Machine Learning Research, 2024.
Markdown
[Sun et al. "Meta-Learning Under Task Shift." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/sun2024tmlr-metalearning/)
BibTeX
@article{sun2024tmlr-metalearning,
  title   = {{Meta-Learning Under Task Shift}},
  author  = {Sun, Lei and Tanaka, Yusuke and Iwata, Tomoharu},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/sun2024tmlr-metalearning/}
}