Exploiting Tractable Substructures in Intractable Networks
Abstract
We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher-order interactions into a first-order hidden Markov model, treating the corrections (but not the first-order structure) within mean field theory.
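For context, the baseline that this paper improves on is the naive, fully factorized mean field approximation, in which every unit is treated as independent. The sketch below (not the paper's algorithm; the network, couplings `J`, and biases `h` are illustrative assumptions) iterates the standard mean field fixed-point equations for a pairwise binary model; the paper's refinement instead keeps tractable substructures, such as an HMM backbone, intact rather than factorizing over all units.

```python
import numpy as np

def naive_mean_field(J, h, n_iter=200, damping=0.5):
    """Naive mean field for P(s) ∝ exp(s·J·s/2 + h·s), s_i ∈ {-1, +1}.

    Iterates m_i <- tanh(sum_j J_ij m_j + h_i) to a fixed point,
    with damping for stable convergence. Returns the approximate
    magnetizations m_i = <s_i>. Illustrative sketch only.
    """
    m = np.zeros_like(h, dtype=float)
    for _ in range(n_iter):
        m_new = np.tanh(J @ m + h)
        m = damping * m + (1 - damping) * m_new  # damped update
    return m

# Toy example: two ferromagnetically coupled units with a positive field.
J = np.array([[0.0, 0.5], [0.5, 0.0]])
h = np.array([0.2, 0.2])
print(naive_mean_field(J, h))
```

Because the factorized ansatz ignores correlations between the two units, a structured approximation that treats the coupled pair exactly (as the paper advocates for larger tractable substructures) would recover those correlations at no extra cost.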
Cite
Saul and Jordan. "Exploiting Tractable Substructures in Intractable Networks." Neural Information Processing Systems, 1995. https://mlanthology.org/neurips/1995/saul1995neurips-exploiting/
BibTeX
@inproceedings{saul1995neurips-exploiting,
title = {{Exploiting Tractable Substructures in Intractable Networks}},
author = {Saul, Lawrence K. and Jordan, Michael I.},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {486--492},
url = {https://mlanthology.org/neurips/1995/saul1995neurips-exploiting/}
}