Adversarial Task Up-Sampling for Meta-Learning

Abstract

The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers the meta-testing tasks. Frequent violation of this assumption in applications with either insufficient tasks or a very narrow meta-training task distribution leads to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, but how to generate tasks that are both correct and sufficiently imaginative remains an open question. In this paper, we seek an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. Moreover, the resulting approach, named Adversarial Task Up-sampling (ATU), is able to generate tasks that maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate the marked improvement of ATU over state-of-the-art task augmentation strategies, both in meta-testing performance and in the quality of the up-sampled tasks.
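The abstract describes the method only at a high level; the PyTorch sketch below illustrates one way an adversarial task up-sampling objective could look for the sine-regression setting. It is a minimal illustration under stated assumptions, not the authors' implementation: the TaskUpSampler architecture, the fidelity term that keeps generated tasks close to a sampled real task, the plain regressor standing in for the meta-learner, and all hyperparameters are assumptions introduced here for clarity.

```python
# Minimal sketch of adversarial task up-sampling (assumed, illustrative setup).
import torch
import torch.nn as nn

def sample_sine_task(k=10):
    """Sample a sine regression task: y = A * sin(x + phi)."""
    A = torch.rand(1) * 4.9 + 0.1        # amplitude in [0.1, 5.0]
    phi = torch.rand(1) * 3.14159        # phase in [0, pi]
    x = torch.rand(k, 1) * 10 - 5        # inputs in [-5, 5]
    return x, A * torch.sin(x + phi)

class MetaLearner(nn.Module):
    """Small regressor standing in for the meta-learner (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(),
                                 nn.Linear(40, 40), nn.ReLU(),
                                 nn.Linear(40, 1))
    def forward(self, x):
        return self.net(x)

class TaskUpSampler(nn.Module):
    """Maps a (naive) task representation plus noise to an up-sampled task,
    i.e. a new set of (x, y) pairs. Architecture is an assumption."""
    def __init__(self, k=10, latent_dim=8):
        super().__init__()
        self.k, self.latent_dim = k, latent_dim
        self.net = nn.Sequential(nn.Linear(2 * k + latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * k))
    def forward(self, x, y):
        rep = torch.cat([x.flatten(), y.flatten()])   # task representation
        z = torch.randn(self.latent_dim)              # noise for diversity
        out = self.net(torch.cat([rep, z]))
        return out[:self.k, None], out[self.k:, None]

meta, upsampler = MetaLearner(), TaskUpSampler()
opt_meta = torch.optim.Adam(meta.parameters(), lr=1e-3)
opt_up = torch.optim.Adam(upsampler.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(1000):
    x, y = sample_sine_task()

    # Up-sampler step: generate a task that maximizes the current meta-learner's
    # loss (adversarial term) while staying close to a real task (fidelity term).
    x_up, y_up = upsampler(x, y)
    fidelity = mse(x_up, x) + mse(y_up, y)
    adv_loss = fidelity - mse(meta(x_up), y_up)
    opt_up.zero_grad()
    adv_loss.backward()
    opt_up.step()

    # Meta-learner step: fit both the real and the up-sampled task.
    x_up, y_up = upsampler(x, y)
    meta_loss = mse(meta(x), y) + mse(meta(x_up.detach()), y_up.detach())
    opt_meta.zero_grad()
    meta_loss.backward()
    opt_meta.step()
```

In this sketch the up-sampler is rewarded for producing tasks the latest meta-learner handles poorly, while the fidelity term crudely stands in for keeping generated tasks on the task distribution; the paper's actual network and objectives should be taken from the publication itself.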

Cite

Text

Wu et al. "Adversarial Task Up-Sampling for Meta-Learning." Neural Information Processing Systems, 2022.

Markdown

[Wu et al. "Adversarial Task Up-Sampling for Meta-Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wu2022neurips-adversarial/)

BibTeX

@inproceedings{wu2022neurips-adversarial,
  title     = {{Adversarial Task Up-Sampling for Meta-Learning}},
  author    = {Wu, Yichen and Huang, Long-Kai and Wei, Ying},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/wu2022neurips-adversarial/}
}