Task Attended Meta-Learning for Few-Shot Learning
Abstract
Meta-learning (ML) has emerged as a promising direction for learning models under constrained resource settings such as few-shot learning. The popular approaches to ML either learn a generalizable initial model or a generic parametric optimizer through episodic training. The former approaches leverage the knowledge from a batch of tasks to learn an optimal prior. In this work, we study the importance of tasks in a batch for ML. We hypothesize that the common assumption in batch episodic training, where each task in a batch contributes equally to learning an optimal meta-model, need not be true. We propose to weight the tasks in a batch according to their "importance" in improving the meta-model's learning. To this end, we introduce a training curriculum, called task attended meta-training, to weight the tasks in a batch. The task attention is a standalone unit and can be integrated with any batch episodic training regimen. Comparisons of the task-attended ML models with their non-task-attended counterparts on complex datasets like miniImageNet, FC100, and tieredImageNet validate its effectiveness.
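The core idea, reweighting the per-task losses in a meta-batch rather than averaging them uniformly, can be sketched as below. This is a minimal illustration, not the paper's implementation: the attention scores here are placeholder numbers, whereas in the paper they are produced by a learned task-attention module.

```python
import math

def task_attention_weights(task_scores):
    """Normalize per-task importance scores into weights via softmax.

    In the paper these scores come from a learned attention unit;
    here they are supplied directly for illustration."""
    m = max(task_scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in task_scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_meta_loss(task_losses, weights):
    """Meta-objective: importance-weighted sum of per-task losses,
    replacing the usual uniform average over the batch."""
    return sum(w * l for w, l in zip(weights, task_losses))

# Example: a batch of 3 tasks; the third task receives the largest weight.
losses = [0.9, 1.2, 0.5]   # per-task query losses (placeholder values)
scores = [0.1, 0.3, 1.5]   # per-task importance scores (placeholder values)
weights = task_attention_weights(scores)
meta_loss = weighted_meta_loss(losses, weights)
```

Setting all scores equal recovers the standard uniform-averaging meta-objective, so the attention unit can be bolted onto any batch episodic training loop without changing its other components.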
Cite

Text
Aimen et al. "Task Attended Meta-Learning for Few-Shot Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.

Markdown
[Aimen et al. "Task Attended Meta-Learning for Few-Shot Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/aimen2021neuripsw-task/)

BibTeX
@inproceedings{aimen2021neuripsw-task,
title = {{Task Attended Meta-Learning for Few-Shot Learning}},
author = {Aimen, Aroof and Sidheekh, Sahil and Ladrecha, Bharat and Krishnan, Narayanan Chatapuram},
booktitle = {NeurIPS 2021 Workshops: MetaLearn},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/aimen2021neuripsw-task/}
}