On the Concentration of Expectation and Approximate Inference in Layered Networks
Abstract
We present an analysis of concentration-of-expectation phenomena in layered Bayesian networks that use generalized linear models as the local conditional probabilities. This framework encompasses a wide variety of probability distributions, including both discrete and continuous random variables. We utilize ideas from large deviation analysis and the delta method to devise and evaluate a class of approximate inference algorithms for layered Bayesian networks that have superior asymptotic error bounds and very fast computation time.
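To make the abstract's claim more concrete, the following is a minimal Python sketch, not the paper's algorithm, of the concentration phenomenon in a layered network with logistic (sigmoid) local conditionals, one instance of the generalized linear models mentioned above. Under an assumed O(1/N) weight scaling, the aggregate input to a child unit concentrates around its expectation as the parent layer widens, so a plug-in, delta-method-style approximation E[sigmoid(sum_i w_i X_i)] ~ sigmoid(sum_i w_i E[X_i]) becomes increasingly accurate. The sigmoid link, weight scaling, and all numerical parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch (assumptions: sigmoid local conditionals,
# random weights scaled by 1/N, binary parent units). Not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for n_parents in [10, 100, 1000, 10000]:
    # Weights scaled by 1/N so the total input to the child unit stays O(1).
    w = rng.normal(0.0, 1.0, size=n_parents) / n_parents
    # Marginal probabilities of the binary parent-layer units (assumed known).
    parent_prob = sigmoid(rng.normal(0.0, 1.0, size=n_parents))

    # Monte Carlo: sample parent configurations and record the aggregate input.
    inputs = np.array([
        w @ (rng.random(n_parents) < parent_prob).astype(float)
        for _ in range(500)
    ])

    # Plug-in ("delta method"-style) approximation replaces the expectation of
    # a nonlinear function with the function of the expectation; concentration
    # of the aggregate input makes this increasingly accurate as N grows.
    plug_in = sigmoid(w @ parent_prob)
    monte_carlo = sigmoid(inputs).mean()
    print(f"N={n_parents:6d}  input std={inputs.std():.4f}  "
          f"MC E[sigmoid]={monte_carlo:.4f}  plug-in={plug_in:.4f}")
```

Running the sketch shows the standard deviation of the aggregate input shrinking with N while the Monte Carlo estimate and the plug-in value converge, which is the kind of concentration-of-expectation behavior the analysis in the paper quantifies.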
Cite

Nguyen and Jordan. "On the Concentration of Expectation and Approximate Inference in Layered Networks." Neural Information Processing Systems, 2003.
BibTeX
@inproceedings{nguyen2003neurips-concentration,
title = {{On the Concentration of Expectation and Approximate Inference in Layered Networks}},
author = {Nguyen, Xuanlong and Jordan, Michael I.},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {393-400},
url = {https://mlanthology.org/neurips/2003/nguyen2003neurips-concentration/}
}