GLAD: Learning Sparse Graph Recovery
Abstract
Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an $\ell_1$ regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation and recover the graph structure. Recently, there has been a surge of interest in learning algorithms directly from data; in this setting, the goal is to learn a mapping from the empirical covariance matrix to the sparse precision matrix. However, this is a challenging task, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to the precision matrix may require many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as its model inductive bias and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.
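To make the formulation concrete, the sketch below spells out the $\ell_1$-regularized negative Gaussian log-likelihood and an elementwise soft-thresholding step, the proximal operator that AM-style solvers (and GLAD's unrolled iterations) use to enforce sparsity. This is an illustrative NumPy sketch of the classical objective, not the paper's learned GLAD architecture; the function names and the regularization constants are our own choices.

```python
import numpy as np

def soft_threshold(X, lam):
    """Elementwise soft-thresholding: the proximal operator of lam * ||.||_1.
    Entries with |x| <= lam are set exactly to zero, inducing sparsity."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def objective(theta, S, rho):
    """l1-regularized negative log-likelihood of a Gaussian graphical model:
    -log det(Theta) + tr(S @ Theta) + rho * ||Theta||_1 (off-diagonal only)."""
    _, logdet = np.linalg.slogdet(theta)
    l1_offdiag = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return -logdet + np.trace(S @ theta) + rho * l1_offdiag

# Toy example: empirical covariance from samples of a 3-variable Gaussian.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
S = np.cov(X, rowvar=False)

# SPD initialization (a common choice: regularized inverse of S),
# followed by one sparsifying soft-thresholding step.
theta = np.linalg.inv(S + 0.1 * np.eye(3))
theta_sparse = soft_threshold(theta, 0.05)
```

An AM-style solver alternates a smooth update that decreases the likelihood term with this proximal step; GLAD keeps that alternating structure but learns the step-dependent parameters (such as the thresholds) from data.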
Cite

Text

Shrivastava et al. "GLAD: Learning Sparse Graph Recovery." International Conference on Learning Representations, 2020.

Markdown

[Shrivastava et al. "GLAD: Learning Sparse Graph Recovery." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/shrivastava2020iclr-glad/)

BibTeX
@inproceedings{shrivastava2020iclr-glad,
title = {{GLAD: Learning Sparse Graph Recovery}},
author = {Shrivastava, Harsh and Chen, Xinshi and Chen, Binghong and Lan, Guanghui and Aluru, Srinivas and Liu, Han and Song, Le},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/shrivastava2020iclr-glad/}
}