Predictive Complexity Priors

Abstract

Specifying a Bayesian prior is notoriously difficult for complex models such as neural networks. Reasoning about parameters is made challenging by the high-dimensionality and over-parameterization of the space. Priors that seem benign and uninformative can have unintuitive and detrimental effects on a model’s predictions. For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model’s predictions to those of a reference model. Although the prior is originally defined on the model outputs, we transfer it to the model parameters via a change of variables. The traditional Bayesian workflow can then proceed as usual. We apply our predictive complexity prior to high-dimensional regression, reasoning over neural network depth, and sharing of statistical strength for few-shot learning.
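
The core mechanism the abstract describes, placing a base prior on a predictive divergence from a reference model and pulling it back onto the parameters through a change of variables, can be sketched in a few lines. The snippet below is a minimal illustration under assumed choices (unit-variance Gaussian predictives, an Exponential base prior on the divergence, and the hypothetical names `divergence` and `log_predcp`); it is not the authors' implementation.

```python
# Minimal sketch of the change-of-variables idea behind a predictive complexity
# prior. Assumptions: the tau-indexed model and the reference model both give
# unit-variance Gaussian predictives, and an Exponential(rate) base prior is
# placed on the predictive divergence D(tau).
import jax
import jax.numpy as jnp

def divergence(tau, x):
    """Hypothetical predictive divergence D(tau): mean KL between the
    tau-indexed model's predictive N(tau * x, 1) and the reference
    model's predictive N(0, 1), averaged over the inputs x."""
    mu_ref = jnp.zeros_like(x)                      # reference predictive mean
    mu_tau = tau * x                                # tau-indexed predictive mean
    return jnp.mean(0.5 * (mu_tau - mu_ref) ** 2)   # KL for unit-variance Gaussians

def log_predcp(tau, x, rate=1.0):
    """Log prior density on tau: Exponential(rate) base density evaluated at
    D(tau), plus the log absolute Jacobian of the map tau -> D(tau)."""
    d = divergence(tau, x)
    dd_dtau = jax.grad(divergence)(tau, x)          # dD/dtau via autodiff
    log_base = jnp.log(rate) - rate * d             # Exponential log-density at D(tau)
    return log_base + jnp.log(jnp.abs(dd_dtau) + 1e-12)

x = jnp.linspace(-2.0, 2.0, 50)
print(log_predcp(0.5, x))                           # log prior density at tau = 0.5
```

With this construction, the prior on tau places mass where the model's predictions stay close to the reference, which is the behaviour the abstract motivates for tasks such as reasoning over network depth.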

Cite

Text

Nalisnick et al. "Predictive Complexity Priors." Artificial Intelligence and Statistics, 2021.

Markdown

[Nalisnick et al. "Predictive Complexity Priors." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/nalisnick2021aistats-predictive/)

BibTeX

@inproceedings{nalisnick2021aistats-predictive,
  title     = {{Predictive Complexity Priors}},
  author    = {Nalisnick, Eric and Gordon, Jonathan and Hernandez-Lobato, Jose Miguel},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {694--702},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/nalisnick2021aistats-predictive/}
}