Adaptive Sparsity in Gaussian Graphical Models

Abstract

An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys’ hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.
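As a concrete illustration of the two ingredients the abstract mentions, the sketch below parameterizes the precision matrix through its Cholesky factor, Lambda = L L^T, so that symmetry and positive-definiteness hold by construction, and evaluates the baseline Laplace-prior (L1-penalized) objective with a fixed sparsity rate lam. This is a minimal NumPy sketch, not the authors' implementation: the function names, the exp-transformed diagonal, and the fixed rate lam are illustrative assumptions, and lam is exactly the hyperparameter that the paper's Jeffreys hyperprior eliminates.

import numpy as np

def precision_from_cholesky(theta, d):
    # Fill the lower triangle of L from the unconstrained vector theta
    # (length d*(d+1)//2), then exponentiate the diagonal so L has a
    # strictly positive diagonal; Lambda = L @ L.T is then guaranteed
    # to be symmetric and positive-definite.
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    di = np.diag_indices(d)
    L[di] = np.exp(L[di])
    return L @ L.T

def neg_log_posterior(theta, X, lam):
    # Gaussian negative log-likelihood of centered data X (n x d) under
    # precision Lambda, plus a Laplace (L1) prior with fixed rate lam on
    # the off-diagonal precision entries. The paper's contribution is to
    # remove the fixed lam via a Jeffreys hyperprior; here it stays fixed.
    n, d = X.shape
    P = precision_from_cholesky(theta, d)
    S = X.T @ X / n                          # sample covariance
    _, logdet = np.linalg.slogdet(P)         # numerically safe log-det
    nll = -0.5 * n * (logdet - np.trace(S @ P))
    off_diag = P - np.diag(np.diag(P))
    return nll + lam * np.abs(off_diag).sum()

# Example: evaluate the objective on synthetic centered data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
X -= X.mean(axis=0)
theta0 = np.zeros(4 * 5 // 2)                # L = I, i.e. Lambda = I
print(neg_log_posterior(theta0, X, lam=0.1))

Because theta is unconstrained, this objective can be handed directly to an off-the-shelf optimizer such as scipy.optimize.minimize; the Cholesky parameterization removes the need for an explicit positive-definiteness constraint during optimization.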

Cite

Text

Wong et al. "Adaptive Sparsity in Gaussian Graphical Models." International Conference on Machine Learning, 2013.

Markdown

[Wong et al. "Adaptive Sparsity in Gaussian Graphical Models." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/wong2013icml-adaptive/)

BibTeX

@inproceedings{wong2013icml-adaptive,
  title     = {{Adaptive Sparsity in Gaussian Graphical Models}},
  author    = {Wong, Eleanor and Awate, Suyash and Fletcher, P. Thomas},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {311--319},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/wong2013icml-adaptive/}
}