Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition

Abstract

This paper studies the problem of learning large-scale Gaussian graphical models that are multivariate totally positive of order two ($\text{MTP}_2$). By introducing the concept of a bridge, which commonly exists in large-scale sparse graphs, we show that the entire problem can be equivalently optimized through (1) several smaller-scale sub-problems induced by a \emph{bridge-block decomposition} of the thresholded sample covariance graph and (2) a set of explicit solutions for the entries corresponding to \emph{bridges}. From a practical standpoint, this simple and provable principle can be used to break a large problem down into small, tractable ones, leading to an enormous reduction in computational complexity and substantial improvements for all existing algorithms. Synthetic and real-world experiments demonstrate that our proposed method achieves a significant speed-up over state-of-the-art benchmarks.
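The graph-theoretic step behind the abstract can be illustrated with a small self-contained sketch. The edge list below is a made-up stand-in for the graph obtained by thresholding the sample covariance (connect $i$-$j$ when $|S_{ij}|$ exceeds a threshold); the actual $\text{MTP}_2$ estimation on each block is not shown. Bridges are located with Tarjan's low-link algorithm, and deleting them yields the blocks that induce the independent sub-problems:

```python
from collections import defaultdict

def find_bridges(adj):
    """Tarjan bridge finding: edge (u, v) is a bridge iff no back edge
    connects v's DFS subtree to u or an ancestor, i.e. low[v] > disc[u]."""
    disc, low, bridges, timer = {}, {}, [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree cannot reach above u
                    bridges.append((u, v))

    for u in list(adj):
        if u not in disc:
            dfs(u, None)
    return bridges

def bridge_blocks(adj, bridges):
    """Connected components that remain after deleting the bridge edges."""
    banned = {frozenset(e) for e in bridges}
    seen, blocks = set(), []
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            comp.add(u)
            for v in adj[u]:
                if v not in seen and frozenset((u, v)) not in banned:
                    seen.add(v)
                    stack.append(v)
        blocks.append(comp)
    return blocks

# Toy graph standing in for a thresholded covariance graph:
# two triangles joined by a single edge, which is therefore a bridge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

bridges = find_bridges(adj)
blocks = bridge_blocks(adj, bridges)
print(bridges)                            # [(2, 3)]
print(sorted(sorted(c) for c in blocks))  # [[0, 1, 2], [3, 4, 5]]
```

On this toy graph the single bridge $(2,3)$ splits the problem into two 3-variable blocks: the paper's point is that each block can then be estimated independently, while the precision-matrix entries on bridge edges admit explicit closed-form solutions.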

Cite

Text

Wang et al. "Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition." Neural Information Processing Systems, 2023.

Markdown

[Wang et al. "Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wang2023neurips-learning-d/)

BibTeX

@inproceedings{wang2023neurips-learning-d,
  title     = {{Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition}},
  author    = {Wang, Xiwen and Ying, Jiaxi and Palomar, Daniel},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wang2023neurips-learning-d/}
}