Bayesian Leave-One-Out Cross-Validation for Large Data

Abstract

Model inference tasks, such as model comparison, model checking, and model selection, are an important part of model development. Leave-one-out cross-validation (LOO) is a general approach for assessing the generalizability of a model, but unfortunately, it does not scale well to large datasets. We propose combining approximate inference techniques with probability-proportional-to-size (PPS) sampling for fast LOO model evaluation on large datasets. We provide both theoretical and empirical results showing good properties for large data.
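The core idea can be illustrated with a minimal sketch: rather than computing the exact pointwise LOO log predictive density for all n observations, draw a small PPS subsample and correct for the unequal sampling probabilities with a Hansen-Hurwitz-style estimator. Everything below is a toy illustration, not the paper's implementation: the values `elpd_approx` stand in for cheap pointwise approximations (e.g. from a posterior approximation), and the "exact" values are simulated as the approximation plus small noise.

```python
import random

random.seed(0)

# Hypothetical setup: n observations with a cheap approximation
# elpd_approx[i] of the pointwise LOO log predictive density
# (stand-in values; in practice these would come from an
# approximate posterior such as a Laplace or variational fit).
n = 50_000
elpd_approx = [-abs(random.gauss(1.0, 0.5)) for _ in range(n)]

# PPS: sampling probability proportional to a "size" measure,
# here the magnitude of the approximate pointwise elpd.
sizes = [abs(e) for e in elpd_approx]
total_size = sum(sizes)
probs = [s / total_size for s in sizes]

# Draw a small subsample (with replacement) proportional to size.
m = 1_000
idx = random.choices(range(n), weights=sizes, k=m)

# In the real method one would now compute the exact elpd_i only
# for the m sampled points; simulated here as approximation + noise.
elpd_exact = [elpd_approx[i] + random.gauss(0.0, 0.01) for i in idx]

# Hansen-Hurwitz estimator of the total elpd over all n points:
# average of y_i / p_i over the m draws.
elpd_hat = sum(y / probs[i] for y, i in zip(elpd_exact, idx)) / m
```

Because the sampling probabilities are roughly proportional to each point's contribution, the subsampled estimate tracks the full-data total closely even with m much smaller than n; that variance-reduction property is what makes the PPS choice effective here.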

Cite

Text

Magnusson et al. "Bayesian Leave-One-Out Cross-Validation for Large Data." International Conference on Machine Learning, 2019.

Markdown

[Magnusson et al. "Bayesian Leave-One-Out Cross-Validation for Large Data." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/magnusson2019icml-bayesian/)

BibTeX

@inproceedings{magnusson2019icml-bayesian,
  title     = {{Bayesian Leave-One-Out Cross-Validation for Large Data}},
  author    = {Magnusson, Måns and Andersen, Michael and Jonasson, Johan and Vehtari, Aki},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {4244--4253},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/magnusson2019icml-bayesian/}
}