A Nested Bi-Level Optimization Framework for Robust Few Shot Learning
Abstract
Model-Agnostic Meta-Learning (MAML), a popular gradient-based meta-learning framework, assumes that the contribution of each task or instance to the meta-learner is equal. Hence, it fails to address the domain shift between base and novel classes in few-shot learning. In this work, we propose a novel robust meta-learning algorithm, NESTEDMAML, which learns to assign weights to training tasks or instances. We consider weights as hyper-parameters and iteratively optimize them using a small set of validation tasks in a nested bi-level optimization approach (in contrast to the standard bi-level optimization in MAML). We then apply NESTEDMAML in the meta-training stage, which involves (1) several tasks sampled from a distribution different from the meta-test task distribution, or (2) some data samples with noisy labels. Extensive experiments on synthetic and real-world datasets demonstrate that NESTEDMAML efficiently mitigates the effects of "unwanted" tasks or instances, leading to significant improvement over state-of-the-art robust meta-learning methods.
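To make the nested structure concrete, below is a minimal sketch (not the authors' implementation) of the three-level idea on a toy linear-regression problem in PyTorch: an inner loop adapts per task as in MAML, a middle level takes a weighted meta-update over training tasks, and an outer level adjusts the task weights so the updated meta-parameters do well on clean validation tasks. The softmax weight parameterization, the one-step hypergradient approximation, and all hyperparameter values are illustrative assumptions.

import torch

def adapt(params, support_x, support_y, inner_lr):
    # One MAML-style inner-loop gradient step on a task's support set.
    loss = torch.nn.functional.mse_loss(support_x @ params, support_y)
    grad, = torch.autograd.grad(loss, params, create_graph=True)
    return params - inner_lr * grad

def task_loss(params, task, inner_lr):
    # Query-set loss of the task-adapted model.
    sx, sy, qx, qy = task
    adapted = adapt(params, sx, sy, inner_lr)
    return torch.nn.functional.mse_loss(qx @ adapted, qy)

def make_task(w):
    # Toy linear-regression task: (support_x, support_y, query_x, query_y).
    x = torch.randn(10, 3)
    return x[:5], x[:5] @ w, x[5:], x[5:] @ w

train_tasks = [make_task(torch.randn(3)) for _ in range(4)]
val_tasks = [make_task(torch.randn(3)) for _ in range(2)]

params = torch.zeros(3, requires_grad=True)                 # meta-parameters
logits = torch.zeros(len(train_tasks), requires_grad=True)  # task-weight logits
inner_lr, meta_lr, weight_lr = 0.1, 0.05, 0.05              # illustrative values

for step in range(100):
    w = torch.softmax(logits, dim=0)  # task weights constrained to the simplex
    # Middle level: weighted meta-loss over training tasks (MAML outer loss).
    meta_loss = sum(w[i] * task_loss(params, t, inner_lr)
                    for i, t in enumerate(train_tasks))
    g_params, = torch.autograd.grad(meta_loss, params, create_graph=True)
    new_params = params - meta_lr * g_params  # tentative meta-update
    # Outer level: move the weights so the updated meta-parameters perform
    # well on clean validation tasks (one-step hypergradient approximation).
    val_loss = sum(task_loss(new_params, t, inner_lr) for t in val_tasks)
    g_logits, = torch.autograd.grad(val_loss, logits)
    with torch.no_grad():
        logits -= weight_lr * g_logits
        params -= meta_lr * g_params.detach()  # commit the meta-update

Under this sketch, a training task drawn from an off-distribution (or with noisy labels) hurts validation-task performance, so the outer level drives its weight toward zero, which is the mitigation effect the abstract describes.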
Cite
Text
Killamsetty et al. "A Nested Bi-Level Optimization Framework for Robust Few Shot Learning." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I7.20678
Markdown
[Killamsetty et al. "A Nested Bi-Level Optimization Framework for Robust Few Shot Learning." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/killamsetty2022aaai-nested/) doi:10.1609/AAAI.V36I7.20678
BibTeX
@inproceedings{killamsetty2022aaai-nested,
title = {{A Nested Bi-Level Optimization Framework for Robust Few Shot Learning}},
author = {Killamsetty, KrishnaTeja and Li, Changbin and Zhao, Chen and Chen, Feng and Iyer, Rishabh K.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
  pages = {7176--7184},
doi = {10.1609/AAAI.V36I7.20678},
url = {https://mlanthology.org/aaai/2022/killamsetty2022aaai-nested/}
}