Consistent MetaReg: Alleviating Intra-Task Discrepancy for Better Meta-Knowledge
Abstract
In the few-shot learning scenario, a data-distribution discrepancy between the training data and test data of a task usually exists because of the limited data. However, most existing meta-learning approaches seldom consider this intra-task discrepancy during the meta-training phase, which can degrade performance. To overcome this limitation, we develop a new consistent meta-regularization method to reduce the intra-task data-distribution discrepancy. Moreover, the proposed meta-regularization can be readily inserted into existing optimization-based meta-learning models to learn better meta-knowledge. In particular, we provide a theoretical analysis proving that, with the proposed meta-regularization, the conventional gradient-based meta-learning method achieves a lower regret bound. Extensive experiments also demonstrate the effectiveness of our method, which improves the performance of state-of-the-art gradient-based meta-learning models on the few-shot classification task.
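The abstract does not give the exact form of the regularizer, but the general idea of adding an intra-task consistency term to a gradient-based inner loop can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the mean-feature discrepancy `feature_discrepancy`, the weight `reg_weight`, and the inner learning rate `inner_lr` are illustrative assumptions only.

```python
# Illustrative sketch (assumed, not the paper's exact formulation): a MAML-style
# inner adaptation step whose loss is augmented with a term penalizing the
# feature-distribution gap between a task's support set and query set.
import torch
import torch.nn as nn
import torch.nn.functional as F


def feature_discrepancy(feat_support, feat_query):
    """Squared distance between mean embeddings of support and query features
    (a simple stand-in for an intra-task distribution-discrepancy measure)."""
    return (feat_support.mean(dim=0) - feat_query.mean(dim=0)).pow(2).sum()


def regularized_inner_step(encoder, head, support_x, support_y, query_x,
                           inner_lr=0.4, reg_weight=0.1):
    """One gradient-based adaptation step whose loss includes the consistency term."""
    feat_s = encoder(support_x)
    feat_q = encoder(query_x)
    loss = F.cross_entropy(head(feat_s), support_y)
    loss = loss + reg_weight * feature_discrepancy(feat_s, feat_q)

    params = list(encoder.parameters()) + list(head.parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Functionally updated parameters; in a full meta-learner these would be used
    # to compute the query-set loss that drives the outer (meta) update.
    return [p - inner_lr * g for p, g in zip(params, grads)]


if __name__ == "__main__":
    encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
    head = nn.Linear(32, 5)                      # 5-way classification head
    support_x, support_y = torch.randn(25, 64), torch.randint(0, 5, (25,))
    query_x = torch.randn(75, 64)
    adapted = regularized_inner_step(encoder, head, support_x, support_y, query_x)
    print(len(adapted), "adapted parameter tensors")
```

Under this reading, the regularizer only changes the inner-loop objective, so it can be dropped into any optimization-based meta-learner without modifying the outer update.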
Cite
Text
Tian et al. "Consistent MetaReg: Alleviating Intra-Task Discrepancy for Better Meta-Knowledge." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/377
Markdown
[Tian et al. "Consistent MetaReg: Alleviating Intra-Task Discrepancy for Better Meta-Knowledge." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/tian2020ijcai-consistent/) doi:10.24963/IJCAI.2020/377
BibTeX
@inproceedings{tian2020ijcai-consistent,
title = {{Consistent MetaReg: Alleviating Intra-Task Discrepancy for Better Meta-Knowledge}},
author = {Tian, Pinzhuo and Qi, Lei and Dong, Shaokang and Shi, Yinghuan and Gao, Yang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {2718--2724},
doi = {10.24963/IJCAI.2020/377},
url = {https://mlanthology.org/ijcai/2020/tian2020ijcai-consistent/}
}