Probabilistic Active Meta-Learning

Abstract

Data-efficient learning algorithms are essential in many practical applications where data collection is expensive, e.g., in robotics due to wear and tear. To address this problem, meta-learning algorithms use prior experience about tasks to learn new, related tasks efficiently. Typically, a set of training tasks is assumed to be given or randomly chosen. However, this setting does not account for the sequential nature that naturally arises when training a model from scratch in real life: how do we collect a set of training tasks in a data-efficient manner? In this work, we introduce task selection based on prior experience into a meta-learning algorithm by conceptualizing the learner and the active meta-learning setting using a probabilistic latent variable model. We provide empirical evidence that our approach improves data-efficiency compared to strong baselines in simulated robotic experiments.
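To make the idea of selecting the next training task from prior experience concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: tasks are represented by latent embeddings, and the next task is chosen as the candidate that looks most novel relative to tasks already seen. The function name and the distance-based scoring rule are assumptions made for illustration.

```python
import numpy as np

def select_next_task(observed_embeddings, candidate_embeddings):
    """Pick the candidate task farthest (on average) from tasks seen so far.

    observed_embeddings:  (n_obs, d) latent embeddings of already-trained tasks
    candidate_embeddings: (n_cand, d) latent embeddings of candidate tasks
    Returns the index of the selected candidate.
    """
    # Mean squared distance from each candidate to every observed embedding;
    # a large distance serves as a crude proxy for expected informativeness.
    diffs = candidate_embeddings[:, None, :] - observed_embeddings[None, :, :]
    scores = (diffs ** 2).sum(axis=-1).mean(axis=1)
    return int(np.argmax(scores))

observed = np.array([[0.0, 0.0], [0.1, 0.0]])
candidates = np.array([[0.05, 0.0], [2.0, 2.0]])
print(select_next_task(observed, candidates))  # selects the distant task, index 1
```

In the probabilistic setting described in the abstract, the embeddings would be distributions inferred by a latent variable model, and the score would account for their uncertainty rather than point distances alone.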

Cite

Text

Kaddour et al. "Probabilistic Active Meta-Learning." Neural Information Processing Systems, 2020.

Markdown

[Kaddour et al. "Probabilistic Active Meta-Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/kaddour2020neurips-probabilistic/)

BibTeX

@inproceedings{kaddour2020neurips-probabilistic,
  title     = {{Probabilistic Active Meta-Learning}},
  author    = {Kaddour, Jean and Saemundsson, Steindor and Deisenroth, Marc},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/kaddour2020neurips-probabilistic/}
}