Learning Some Popular Gaussian Graphical Models Without Condition Number Bounds

Abstract

Gaussian Graphical Models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences. In most of the settings in which they are applied, the number of observed samples is much smaller than the dimension, and the underlying precision matrix is assumed to be sparse. While a variety of algorithms (e.g. Graphical Lasso, CLIME) provably recover the graph structure from a logarithmic number of samples, to do so they require various assumptions on the well-conditioning of the precision matrix that are not information-theoretically necessary.
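To make the setting concrete, here is a minimal sketch of structure recovery with the Graphical Lasso mentioned above, using scikit-learn's `GraphicalLasso` estimator. The chain graph, sample size, and penalty `alpha` are illustrative choices, not from the paper; the paper's point is precisely that such estimators need conditioning assumptions that this well-conditioned toy example happens to satisfy.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Build a sparse, well-conditioned precision matrix on a chain graph:
# Theta[i, i] = 1 and Theta[i, i+1] = Theta[i+1, i] = 0.3.
d = 10
theta = np.eye(d)
for i in range(d - 1):
    theta[i, i + 1] = theta[i + 1, i] = 0.3

# Draw n Gaussian samples with covariance Theta^{-1}.
n = 2000
cov = np.linalg.inv(theta)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Graphical Lasso: L1-penalized maximum-likelihood estimation of the
# precision matrix; alpha controls how sparse the estimate is.
model = GraphicalLasso(alpha=0.05).fit(X)
est = model.precision_

# Read off the estimated edge set from the off-diagonal support.
edges = {(i, j) for i in range(d) for j in range(i + 1, d)
         if abs(est[i, j]) > 1e-3}
print(sorted(edges))
```

With enough samples and a well-conditioned `theta`, the printed edge set should match the chain's edges; the regime studied in the paper is where such conditioning assumptions are dropped.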

Cite

Text

Kelner et al. "Learning Some Popular Gaussian Graphical Models Without Condition Number Bounds." Neural Information Processing Systems, 2020.

Markdown

[Kelner et al. "Learning Some Popular Gaussian Graphical Models Without Condition Number Bounds." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/kelner2020neurips-learning/)

BibTeX

@inproceedings{kelner2020neurips-learning,
  title     = {{Learning Some Popular Gaussian Graphical Models Without Condition Number Bounds}},
  author    = {Kelner, Jonathan and Koehler, Frederic and Meka, Raghu and Moitra, Ankur},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/kelner2020neurips-learning/}
}