Sharing Knowledge for Meta-Learning with Feature Descriptions
Abstract
Language is an important tool for humans to share knowledge. We propose a meta-learning method that shares knowledge across supervised learning tasks using feature descriptions written in natural language, which have not been used by existing meta-learning methods. The proposed method improves predictive performance on unseen tasks with few labeled examples by meta-learning from various tasks. With the feature descriptions, we can find relationships across tasks even when their feature spaces are different. The feature descriptions are encoded using a language model pretrained on a large corpus, which enables us to incorporate human knowledge stored in the corpus into meta-learning. In our experiments, we demonstrate that the proposed method achieves better predictive performance than existing meta-learning methods on a wide variety of real-world datasets provided by the statistical offices of the EU and Japan.
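The key idea of aligning different feature spaces via description embeddings can be illustrated with a minimal sketch. The function names and the hash-based pseudo-embeddings below are hypothetical stand-ins (the paper encodes descriptions with a pretrained language model; any sentence encoder could fill that role), shown only to make the shared-space mapping concrete:

```python
import hashlib
import numpy as np

def encode_description(text, dim=8):
    """Stand-in for a pretrained language-model sentence embedding.

    Hypothetical: a deterministic pseudo-embedding keyed on the text via a
    hash, used here only so the example is self-contained and runnable.
    """
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(dim)

def task_representation(x, descriptions):
    """Map an instance from any feature space into a shared embedding space.

    Each feature value weights its description's embedding, and the weighted
    embeddings are summed, so tasks with different numbers and kinds of
    features all land in the same fixed-dimensional space.
    """
    embs = np.stack([encode_description(d) for d in descriptions])  # (F, dim)
    return x @ embs  # (dim,)

# Two tasks with different feature spaces map to the same 8-dim space.
z_a = task_representation(np.array([1.0, 0.5]),
                          ["age of patient", "blood pressure"])
z_b = task_representation(np.array([2.0, 0.1, 3.0]),
                          ["annual income", "household size", "region code"])
assert z_a.shape == z_b.shape == (8,)
```

Because both tasks are embedded into one shared space, a single meta-learned predictor can consume instances from either task, which is what makes cross-task knowledge sharing possible despite mismatched feature sets.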
Cite
Text
Iwata and Kumagai. "Sharing Knowledge for Meta-Learning with Feature Descriptions." Neural Information Processing Systems, 2022.
Markdown
[Iwata and Kumagai. "Sharing Knowledge for Meta-Learning with Feature Descriptions." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/iwata2022neurips-sharing/)
BibTeX
@inproceedings{iwata2022neurips-sharing,
title = {{Sharing Knowledge for Meta-Learning with Feature Descriptions}},
author = {Iwata, Tomoharu and Kumagai, Atsutoshi},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/iwata2022neurips-sharing/}
}