A Meta-Learning Approach for Custom Model Training
Abstract
Transfer learning and meta-learning are two effective methods for applying knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e., when the target task has only a few classes and a few training examples per class), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pretrained starting point. However, as we show experimentally, meta-learning algorithms that work well in the few-class setting do not generalize well to many-shot and many-class cases. In this paper, we propose a joint training approach that combines transfer learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both few- and many-class and few- and many-shot scenarios.
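The abstract describes joint training that combines a standard transfer-learning objective with a meta-learning objective. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation: it assumes a PyTorch model, a first-order MAML-style meta term, and hypothetical samplers `sample_pretrain_batch` and `sample_meta_task`; the weighting scheme and inner-loop details are placeholders.

```python
# Illustrative sketch only: one joint update combining a supervised
# transfer-learning loss with a first-order MAML-style meta-learning loss.
import copy
import torch
import torch.nn.functional as F

def joint_step(model, optimizer, sample_pretrain_batch, sample_meta_task,
               inner_lr=0.01, meta_weight=1.0):
    optimizer.zero_grad()

    # Transfer-learning term: ordinary supervised loss on source-domain data.
    x, y = sample_pretrain_batch()
    transfer_loss = F.cross_entropy(model(x), y)

    # Meta-learning term: adapt a copy of the model on a task's support set,
    # then evaluate the adapted copy on the task's query set.
    sx, sy, qx, qy = sample_meta_task()
    adapted = copy.deepcopy(model)
    inner_loss = F.cross_entropy(adapted(sx), sy)
    inner_grads = torch.autograd.grad(inner_loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), inner_grads):
            p -= inner_lr * g
    meta_loss = F.cross_entropy(adapted(qx), qy)
    meta_grads = torch.autograd.grad(meta_loss, adapted.parameters())

    # Accumulate both gradient contributions on the shared parameters
    # (first-order approximation: query-set gradients of the adapted copy
    # are applied directly to the original weights).
    transfer_loss.backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            if p.grad is None:
                p.grad = meta_weight * g.clone()
            else:
                p.grad += meta_weight * g
    optimizer.step()
```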
Cite
Text
Eshratifar et al. "A Meta-Learning Approach for Custom Model Training." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33019937
Markdown
[Eshratifar et al. "A Meta-Learning Approach for Custom Model Training." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/eshratifar2019aaai-meta/) doi:10.1609/AAAI.V33I01.33019937
BibTeX
@inproceedings{eshratifar2019aaai-meta,
title = {{A Meta-Learning Approach for Custom Model Training}},
author = {Eshratifar, Amir Erfan and Abrishami, Mohammad Saeed and Eigen, David and Pedram, Massoud},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {9937--9938},
doi = {10.1609/AAAI.V33I01.33019937},
url = {https://mlanthology.org/aaai/2019/eshratifar2019aaai-meta/}
}