ST-MAML: A Stochastic-Task Based Method for Task-Heterogeneous Meta-Learning

Abstract

Optimization-based meta-learning typically assumes that tasks are sampled from a single distribution, an assumption that oversimplifies and limits the diversity of tasks meta-learning can model. Handling tasks from multiple distributions is challenging for meta-learning because it adds ambiguity to task identities. This paper proposes a novel method, ST-MAML, that empowers model-agnostic meta-learning (MAML) to learn from multiple task distributions. ST-MAML encodes tasks using a stochastic neural network module that summarizes every task with a stochastic representation. The proposed Stochastic Task (ST) strategy learns a distribution of solutions for an ambiguous task and allows a meta-model to self-adapt to the current task. ST-MAML also propagates the task representation to enhance the encoding of input variables. Empirically, we demonstrate that ST-MAML outperforms the state of the art on two few-shot image classification tasks, one curve regression benchmark, one image completion problem, and a real-world temperature prediction application.
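
To make the mechanism described above concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: a support set is summarized into a stochastic task representation, which both conditions the learner's inputs and accompanies a MAML-style gradient inner loop. The module names, network sizes, loss, and inner-loop details are illustrative assumptions for a toy regression task, not the authors' released implementation.

# Minimal sketch (assumptions, not the authors' code): a stochastic task encoder
# produces a Gaussian task representation via the reparameterization trick, and a
# MAML-style inner loop adapts a learner whose inputs are augmented with that
# representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticTaskEncoder(nn.Module):
    """Summarizes a task's support set into a sampled task representation."""

    def __init__(self, in_dim, rep_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 64))
        self.mu = nn.Linear(64, rep_dim)
        self.log_var = nn.Linear(64, rep_dim)

    def forward(self, support_x, support_y):
        # Encode each (x, y) pair, then average for a permutation-invariant summary.
        h = self.net(torch.cat([support_x, support_y], dim=-1)).mean(dim=0)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization: sample a stochastic task representation z.
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()


class ConditionedRegressor(nn.Module):
    """Predictor whose input is augmented with the task representation."""

    def __init__(self, x_dim, rep_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + rep_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, z):
        z_tiled = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z_tiled], dim=-1))


def inner_adapt(model, z, x_s, y_s, steps=1, lr=0.01):
    """MAML-style inner loop on a copy of the model's parameters for one task."""
    params = {k: v.clone() for k, v in model.named_parameters()}
    for _ in range(steps):
        pred = torch.func.functional_call(model, params, (x_s, z))
        loss = F.mse_loss(pred, y_s)
        grads = torch.autograd.grad(loss, params.values(), create_graph=True)
        params = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}
    return params


# Toy usage: one regression task with a 5-shot support set and a query set.
x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)
x_q, y_q = torch.randn(15, 1), torch.randn(15, 1)
encoder = StochasticTaskEncoder(in_dim=2, rep_dim=8)
learner = ConditionedRegressor(x_dim=1, rep_dim=8)

z = encoder(x_s, y_s)                      # stochastic task representation
adapted = inner_adapt(learner, z, x_s, y_s)
query_pred = torch.func.functional_call(learner, adapted, (x_q, z))
outer_loss = F.mse_loss(query_pred, y_q)   # meta-objective to backpropagate
outer_loss.backward()

Sampling z rather than using a point estimate is what lets the model maintain a distribution of solutions for an ambiguous task; a full variational treatment would typically also add a regularization term on z (e.g., a KL penalty), which this toy example omits.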

Cite

Text

Wang et al. "ST-MAML: A Stochastic-Task Based Method for Task-Heterogeneous Meta-Learning." Uncertainty in Artificial Intelligence, 2022.

Markdown

[Wang et al. "ST-MAML: A Stochastic-Task Based Method for Task-Heterogeneous Meta-Learning." Uncertainty in Artificial Intelligence, 2022.](https://mlanthology.org/uai/2022/wang2022uai-stmaml/)

BibTeX

@inproceedings{wang2022uai-stmaml,
  title     = {{ST-MAML: A Stochastic-Task Based Method for Task-Heterogeneous Meta-Learning}},
  author    = {Wang, Zhe and Grigsby, Jake and Sekhon, Arshdeep and Qi, Yanjun},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2022},
  pages     = {2066-2074},
  volume    = {180},
  url       = {https://mlanthology.org/uai/2022/wang2022uai-stmaml/}
}