Scalable Inference of Sparsely-Changing Gaussian Markov Random Fields
Abstract
We study the problem of inferring time-varying Gaussian Markov random fields, where the underlying graphical model is both sparse and changes sparsely over time. Most existing methods for the inference of time-varying Markov random fields (MRFs) rely on \textit{regularized maximum likelihood estimation} (MLE), which typically suffers from weak statistical guarantees and high computational cost. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing Gaussian MRFs (GMRFs). The proposed optimization problem is formulated based on the exact $\ell_0$ regularization, and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. We derive sharp statistical guarantees in the high-dimensional regime, showing that such problems can be learned with as few as one sample per time period. Our proposed method is extremely efficient in practice: it can accurately estimate sparsely-changing GMRFs with more than 500 million variables in less than one hour.
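To make the $\ell_0$-constrained formulation concrete, the following is an illustrative sketch, not the paper's exact estimator: the fit term, the data-driven surrogates $\widetilde{\Theta}_t$, and the sparsity budgets $k_t, d_t$ are assumptions introduced here for illustration. The idea is to estimate a sequence of precision matrices that stay close to per-period empirical surrogates while exact $\ell_0$ constraints enforce both the sparsity of each graph and the sparsity of its changes over time.

% Illustrative sketch (assumed form, not the paper's exact formulation).
\begin{align*}
\min_{\Theta_1,\dots,\Theta_T}\quad
  & \sum_{t=1}^{T} \bigl\|\Theta_t - \widetilde{\Theta}_t\bigr\|_F^2 \\
\text{s.t.}\quad
  & \|\Theta_t\|_0 \le k_t, && t = 1,\dots,T
    && \text{(each graph is sparse)} \\
  & \|\Theta_t - \Theta_{t-1}\|_0 \le d_t, && t = 2,\dots,T
    && \text{(changes are sparse)}
\end{align*}

Under this reading, the $\ell_0$ constraints act combinatorially rather than through a convex surrogate such as the $\ell_1$ norm, which is consistent with the abstract's claim of "exact $\ell_0$ regularization" and of near-linear-time solvability.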
Cite
Text
Fattahi and Gomez. "Scalable Inference of Sparsely-Changing Gaussian Markov Random Fields." Neural Information Processing Systems, 2021.
Markdown
[Fattahi and Gomez. "Scalable Inference of Sparsely-Changing Gaussian Markov Random Fields." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/fattahi2021neurips-scalable/)
BibTeX
@inproceedings{fattahi2021neurips-scalable,
title = {{Scalable Inference of Sparsely-Changing Gaussian Markov Random Fields}},
author = {Fattahi, Salar and Gomez, Andres},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/fattahi2021neurips-scalable/}
}