Adaptive Compositional Continual Meta-Learning
Abstract
This paper focuses on continual meta-learning, where few-shot tasks are heterogeneous and arrive sequentially. Recent works use a mixture model over meta-knowledge to handle this heterogeneity, but such methods suffer from parameter inefficiency for two reasons: (1) the underlying assumption of mutual exclusiveness among mixture components prevents meta-knowledge from being shared across heterogeneous tasks, and (2) they can only add mixture components and cannot adaptively filter out redundant ones. In this paper, we propose an Adaptive Compositional Continual Meta-Learning (ACML) algorithm, which employs a compositional premise that associates each task with a subset of mixture components, allowing meta-knowledge to be shared among heterogeneous tasks. Moreover, to adaptively adjust the number of mixture components, we propose a component sparsification method based on evidential theory that filters out redundant components. Experimental results show that ACML outperforms strong baselines, demonstrating the effectiveness of compositional meta-knowledge and confirming that ACML can adaptively learn meta-knowledge.
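To make the compositional premise concrete, below is a minimal sketch of the general idea: meta-knowledge is stored as a pool of mixture components, and a task-conditioned soft gate combines a subset of them into a task-specific parameter vector, so heterogeneous tasks can share overlapping components. This is an illustration under assumptions, not the paper's actual implementation; the class name `CompositionalMetaKnowledge`, the linear gate, and all shapes are hypothetical.

```python
import torch
import torch.nn as nn


class CompositionalMetaKnowledge(nn.Module):
    """Illustrative sketch (hypothetical, not ACML's actual code):
    K learnable component vectors combined per task by a soft gate."""

    def __init__(self, num_components: int, param_dim: int, task_embed_dim: int):
        super().__init__()
        # K component parameter vectors: the shared meta-knowledge pool.
        self.components = nn.Parameter(torch.randn(num_components, param_dim) * 0.01)
        # Gate maps a task embedding to mixture weights over the K components.
        self.gate = nn.Linear(task_embed_dim, num_components)

    def forward(self, task_embedding: torch.Tensor) -> torch.Tensor:
        # Soft assignment lets one task draw on several components at once,
        # instead of being tied to a single mutually exclusive component.
        weights = torch.softmax(self.gate(task_embedding), dim=-1)  # shape (K,)
        # Task-specific parameters = weighted combination of components.
        return weights @ self.components  # shape (param_dim,)


# Usage: build a task-specific parameter vector from a task embedding.
module = CompositionalMetaKnowledge(num_components=8, param_dim=128, task_embed_dim=32)
task_init = module(torch.randn(32))
print(task_init.shape)  # torch.Size([128])
```

Note that the paper additionally sparsifies the component pool via evidential theory to prune redundant components; that step is omitted from this sketch.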
Cite
Text
Wu et al. "Adaptive Compositional Continual Meta-Learning." International Conference on Machine Learning, 2023.
Markdown
[Wu et al. "Adaptive Compositional Continual Meta-Learning." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/wu2023icml-adaptive/)
BibTeX
@inproceedings{wu2023icml-adaptive,
title = {{Adaptive Compositional Continual Meta-Learning}},
author = {Wu, Bin and Fang, Jinyuan and Zeng, Xiangxiang and Liang, Shangsong and Zhang, Qiang},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {37358--37378},
volume = {202},
url = {https://mlanthology.org/icml/2023/wu2023icml-adaptive/}
}