Covariate Distribution Aware Meta-Learning

Abstract

Meta-learning has proven successful at few-shot learning across the regression, classification, and reinforcement learning paradigms. Recent approaches have adopted Bayesian interpretations to improve gradient-based meta-learners by quantifying the uncertainty of the post-adaptation estimates. However, most of these works ignore the latent relationship between the covariate distribution p(x) of a task and the corresponding conditional distribution p(y|x). In this paper, we identify the need to explicitly model the meta-distribution over the task covariates in a hierarchical Bayesian framework. We begin by introducing a graphical model that explicitly leverages very few samples drawn from p(x) to better infer the posterior over the optimal parameters of the conditional distribution p(y|x) for each task. Based on this model, we provide an inference strategy and a corresponding meta-algorithm that explicitly accounts for the meta-distribution over task covariates. Finally, we demonstrate the significant gains of our proposed algorithm on a synthetic regression dataset.
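To make the core idea concrete, the following is a purely illustrative sketch, not the authors' algorithm: a linear-Gaussian caricature in which a handful of unlabeled draws from p(x) are summarized into a task statistic that sets the prior over the task's regression weight, and the few labeled support points then yield a MAP (posterior-mean) adaptation. All names (`prior_from_covariates`, `adapt`) and the specific summary statistics are hypothetical.

```python
import numpy as np

# Illustrative only: covariate-aware few-shot adaptation in a linear-Gaussian
# setting. The mapping from covariate summaries to the prior stands in for
# the role p(x) plays in inferring the posterior over p(y|x) parameters.

rng = np.random.default_rng(0)

def prior_from_covariates(x_unlabeled):
    """Map a few unlabeled covariate samples to a prior mean over the
    task's (scalar) regression slope via simple summary statistics.
    The linear map A is a stand-in for learned meta-parameters."""
    tau = np.array([x_unlabeled.mean(), x_unlabeled.std()])  # task summary
    A = np.array([0.5, 0.2])  # hypothetical meta-learned weights
    return float(tau @ A)

def adapt(x_support, y_support, prior_mean, prior_prec=1.0, noise_prec=10.0):
    """Posterior mean of the slope under a Gaussian prior centred at
    prior_mean -- MAP adaptation that folds in the p(x) information."""
    xtx = noise_prec * float(x_support @ x_support)
    xty = noise_prec * float(x_support @ y_support)
    return (xty + prior_prec * prior_mean) / (xtx + prior_prec)

# One synthetic task: y = 2x + noise, with covariates centred at 3.
x_unlab = rng.normal(3.0, 1.0, size=20)   # cheap unlabeled draws from p(x)
x_sup = rng.normal(3.0, 1.0, size=3)      # tiny labeled support set
y_sup = 2.0 * x_sup + 0.1 * rng.normal(size=3)

w = adapt(x_sup, y_sup, prior_from_covariates(x_unlab))
print(f"adapted slope: {w:.2f}")  # close to the true slope of 2
```

In a full meta-learning loop, the map from covariate summaries to the prior (here the fixed vector `A`) would itself be trained across tasks; this sketch only shows how unlabeled covariates can sharpen per-task adaptation.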

Cite

Text

Setlur et al. "Covariate Distribution Aware Meta-Learning." ICML 2020 Workshops: LifelongML, 2020.

Markdown

[Setlur et al. "Covariate Distribution Aware Meta-Learning." ICML 2020 Workshops: LifelongML, 2020.](https://mlanthology.org/icmlw/2020/setlur2020icmlw-covariate/)

BibTeX

@inproceedings{setlur2020icmlw-covariate,
  title     = {{Covariate Distribution Aware Meta-Learning}},
  author    = {Setlur, Amrith and Dingliwal, Saket and Poczos, Barnabas},
  booktitle = {ICML 2020 Workshops: LifelongML},
  year      = {2020},
  url       = {https://mlanthology.org/icmlw/2020/setlur2020icmlw-covariate/}
}