A Noninformative Prior for Neural Networks
Abstract
While many implementations of Bayesian neural networks use large, complex hierarchical priors, noninformative (flat) priors are very common in much of modern Bayesian statistics. This paper introduces a noninformative prior for feed-forward neural networks and describes several theoretical and practical advantages of this approach. In particular, a simpler prior allows for a simpler Markov chain Monte Carlo algorithm. Details of the MCMC implementation are included.
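To illustrate the kind of simplification the abstract alludes to: under a flat prior the posterior is proportional to the likelihood alone, so a random-walk Metropolis sampler over the network weights needs only a likelihood ratio in its acceptance step. The sketch below is a minimal illustration of that idea on hypothetical toy data, not the paper's actual algorithm; the network size, noise level, and proposal scale are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data (not from the paper).
X = rng.normal(size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=40)

H = 4                          # hidden units (assumed)
dim = H + H + H + 1            # input weights, hidden biases, output weights, output bias
sigma = 0.1                    # noise std, treated as known for simplicity

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return W1, b1, W2, b2

def log_lik(w):
    """Gaussian log-likelihood of a one-hidden-layer tanh network."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2)[:, 0] + b2
    return -0.5 * np.sum((y - pred) ** 2) / sigma ** 2

# Random-walk Metropolis: with a flat prior, the acceptance ratio
# reduces to the likelihood ratio alone.
w = rng.normal(scale=0.5, size=dim)
ll = log_lik(w)
trace = []
for step in range(5000):
    prop = w + 0.05 * rng.normal(size=dim)
    ll_prop = log_lik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        w, ll = prop, ll_prop
    trace.append(ll)
```

A hierarchical prior would instead require prior-ratio terms (and often extra Gibbs steps for hyperparameters) in the update; dropping them is the practical simplification the abstract highlights.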
Cite
Text

Lee. "A Noninformative Prior for Neural Networks." Machine Learning, 2003. doi:10.1023/A:1020258113913

Markdown

[Lee. "A Noninformative Prior for Neural Networks." Machine Learning, 2003.](https://mlanthology.org/mlj/2003/lee2003mlj-noninformative/) doi:10.1023/A:1020258113913

BibTeX
@article{lee2003mlj-noninformative,
title = {{A Noninformative Prior for Neural Networks}},
author = {Lee, Herbert K. H.},
journal = {Machine Learning},
year = {2003},
pages = {197--212},
doi = {10.1023/A:1020258113913},
volume = {50},
url = {https://mlanthology.org/mlj/2003/lee2003mlj-noninformative/}
}