Information Theoretic Meta Learning with Gaussian Processes

Abstract

We formulate meta-learning using information-theoretic concepts, namely mutual information and the information bottleneck. The idea is to learn a stochastic representation, or encoding, of the task description, given by a training set, that is highly informative for predicting the validation set. By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta-learning. This framework unifies existing gradient-based algorithms and also allows us to derive new ones. In particular, we develop a memory-based algorithm that uses Gaussian processes to obtain non-parametric encoding representations. We demonstrate our method on a few-shot regression problem and on four few-shot classification problems, obtaining competitive accuracy compared to existing baselines.
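To make the Gaussian-process ingredient concrete, the following is a minimal, self-contained sketch of exact GP posterior prediction on a few-shot support ("training") set, evaluated at query ("validation") inputs. It assumes an RBF kernel and hand-picked hyperparameters; it illustrates the non-parametric conditioning the paper builds on, not the authors' full information-theoretic meta-learning algorithm.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(a, b) = v * exp(-(a - b)^2 / (2 l^2)).
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(x_support, y_support, x_query, noise=1e-2):
    # Condition a zero-mean GP on the support set and predict the
    # query inputs in closed form (standard GP regression equations).
    K = rbf_kernel(x_support, x_support) + noise * np.eye(len(x_support))
    K_star = rbf_kernel(x_query, x_support)
    alpha = np.linalg.solve(K, y_support)
    mean = K_star @ alpha
    cov = rbf_kernel(x_query, x_query) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

# Hypothetical few-shot task: 5 noisy support points from a sine wave,
# then predict a dense grid of query inputs.
rng = np.random.default_rng(0)
x_s = rng.uniform(-3.0, 3.0, size=5)
y_s = np.sin(x_s) + 0.01 * rng.standard_normal(5)
x_q = np.linspace(-3.0, 3.0, 50)
mu, cov = gp_posterior(x_s, y_s, x_q)
```

The posterior mean interpolates the support points while the posterior covariance quantifies uncertainty away from them; in the paper this closed-form conditioning plays the role of the task encoder, so adapting to a new task requires no gradient steps on task-specific parameters.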

Cite

Text

Titsias et al. "Information Theoretic Meta Learning with Gaussian Processes." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Titsias et al. "Information Theoretic Meta Learning with Gaussian Processes." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/titsias2021uai-information/)

BibTeX

@inproceedings{titsias2021uai-information,
  title     = {{Information Theoretic Meta Learning with Gaussian Processes}},
  author    = {Titsias, Michalis K. and Ruiz, Francisco J. R. and Nikoloutsopoulos, Sotirios and Galashov, Alexandre},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {1597--1606},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/titsias2021uai-information/}
}