Posterior Consistency of the Silverman G-Prior in Bayesian Model Choice

Abstract

Kernel-based supervised learning methods can be unified using tools from regularization theory. The duality between regularization penalties and priors allows regularization methods to be interpreted as maximum a posteriori estimation, and this duality has motivated Bayesian interpretations of kernel methods. In this paper we pursue a Bayesian interpretation of sparsity in the kernel setting by making use of a mixture of a point-mass distribution and a prior that we refer to as "Silverman's g-prior." We provide a theoretical analysis of the posterior consistency of a Bayesian model choice procedure based on this prior. We also establish the asymptotic relationship between this procedure and the Bayesian information criterion.
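As background for the abstract, here is a minimal sketch of a spike-and-slab construction of the kind described, assuming that the "Silverman g-prior" plays the role of Zellner's g-prior with the design cross-product replaced by the kernel Gram matrix (in the spirit of Silverman's 1985 Bayesian reading of penalized smoothing); the symbols g, \(\pi\), \(\gamma\), and \(K_\gamma\) below are illustrative notation, not the paper's. Writing \(\gamma \in \{0,1\}^n\) for the binary vector indicating which kernel expansion coefficients are active,

\[
\gamma_j \sim \operatorname{Bernoulli}(\pi), \qquad
\beta_j = 0 \ \text{if } \gamma_j = 0, \qquad
\beta_\gamma \mid \gamma, \sigma^2 \sim \mathcal{N}\!\left(0,\; g\,\sigma^2\, K_\gamma^{-1}\right),
\]

where \(K_\gamma\) denotes the sub-matrix of the Gram matrix restricted to the active indices. Each coefficient's marginal prior is thus a mixture of a point mass at zero and a Gaussian slab, and Bayesian model choice amounts to comparing posterior probabilities of the vectors \(\gamma\). As general background (not a claim about this paper's specific result), letting g grow with the sample size, e.g. g = n, is the standard mechanism by which log Bayes factors under g-type priors match the Bayesian information criterion up to bounded terms.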

Cite

Text

Zhang et al. "Posterior Consistency of the Silverman G-Prior in Bayesian Model Choice." Neural Information Processing Systems, 2008.

Markdown

[Zhang et al. "Posterior Consistency of the Silverman G-Prior in Bayesian Model Choice." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/zhang2008neurips-posterior/)

BibTeX

@inproceedings{zhang2008neurips-posterior,
  title     = {{Posterior Consistency of the Silverman G-Prior in Bayesian Model Choice}},
  author    = {Zhang, Zhihua and Jordan, Michael I. and Yeung, Dit-Yan},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {1969--1976},
  url       = {https://mlanthology.org/neurips/2008/zhang2008neurips-posterior/}
}