Learning Markov Structure by Maximum Entropy Relaxation
Abstract
We propose a new approach for learning a sparse graphical model approximation to a specified multivariate probability distribution (such as the empirical distribution of sample data). The selection of sparse graph structure arises naturally in our approach through solution of a convex optimization problem, which differentiates our method from standard combinatorial approaches. We seek the maximum entropy relaxation (MER) within an exponential family, which maximizes entropy subject to constraints that marginal distributions on small subsets of variables are close to the prescribed marginals in relative entropy. To solve MER, we present a modified primal-dual interior point method that exploits sparsity of the Fisher information matrix in models defined on chordal graphs. This leads to a tractable, scalable approach provided the level of relaxation in MER is sufficient to obtain a thin graph. The merits of our approach are investigated by recovering the structure of some simple graphical models from sample data.
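Based on the abstract's description, the MER problem can be sketched as the following convex program (the notation here, including the relaxation parameters \(\delta_E\), is illustrative and not taken from the paper itself):

```latex
\begin{aligned}
\max_{p \in \mathcal{F}} \quad & H(p) \\
\text{s.t.} \quad & D\!\left(\hat{p}_E \,\middle\|\, p_E\right) \le \delta_E
  \quad \text{for each small variable subset } E,
\end{aligned}
```

where \(\mathcal{F}\) is the chosen exponential family, \(H(p)\) is the entropy of \(p\), \(p_E\) is the marginal of \(p\) on subset \(E\), \(\hat{p}_E\) is the prescribed marginal, and \(D(\cdot\|\cdot)\) is relative entropy (KL divergence). Tightening \(\delta_E \to 0\) forces exact marginal matching, while larger \(\delta_E\) admits higher-entropy, sparser solutions.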
Cite
Text
Johnson et al. "Learning Markov Structure by Maximum Entropy Relaxation." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
Markdown
[Johnson et al. "Learning Markov Structure by Maximum Entropy Relaxation." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/johnson2007aistats-learning/)
BibTeX
@inproceedings{johnson2007aistats-learning,
  title = {{Learning Markov Structure by Maximum Entropy Relaxation}},
  author = {Johnson, Jason K. and Chandrasekaran, Venkat and Willsky, Alan S.},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year = {2007},
  pages = {203--210},
  volume = {2},
  url = {https://mlanthology.org/aistats/2007/johnson2007aistats-learning/}
}