Nonconvex Sparse Graph Learning Under Laplacian Constrained Graphical Model
Abstract
In this paper, we consider the problem of learning a sparse graph under the Laplacian constrained Gaussian graphical model. This problem can be formulated as a penalized maximum likelihood estimation of the precision matrix under Laplacian structural constraints. As in the classical graphical lasso problem, recent works have used the $\ell_1$-norm to promote sparsity in the Laplacian constrained precision matrix estimation. However, we observe empirically that the $\ell_1$-norm is not effective in imposing a sparse solution in this problem. From a theoretical perspective, we prove that a large regularization parameter will, surprisingly, lead to a solution representing a fully connected graph instead of a sparse graph. To address this issue, we propose a nonconvex penalized maximum likelihood estimation method and establish the order of the statistical error. Numerical experiments on synthetic and real-world data sets demonstrate the effectiveness of the proposed method.
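The claim that the $\ell_1$-norm fails to induce sparsity here admits a short sketch. The notation below is an assumption for illustration and may differ in detail from the paper's exact formulation (e.g., in how the Laplacian's zero eigenvalue is handled). A combinatorial graph Laplacian $L$ has zero row sums and nonpositive off-diagonal entries, so its off-diagonal $\ell_1$-norm collapses to a linear function of $L$:

$$
\|L\|_{1,\mathrm{off}} \;=\; \sum_{i\neq j} |L_{ij}| \;=\; -\sum_{i\neq j} L_{ij} \;=\; \sum_{i} L_{ii} \;=\; \operatorname{tr}(L).
$$

Consequently, a penalized maximum likelihood estimation of the form

$$
\min_{L \in \mathcal{L}} \; \operatorname{tr}(LS) \;-\; \log\det(L + J) \;+\; \lambda \,\|L\|_{1,\mathrm{off}},
$$

where $\mathcal{L}$ denotes the set of Laplacian matrices, $S$ the sample covariance, and $J = \tfrac{1}{p}\mathbf{1}\mathbf{1}^{\top}$ a rank-one correction accounting for the Laplacian's zero eigenvalue, is equivalent to

$$
\min_{L \in \mathcal{L}} \; \operatorname{tr}\!\big(L(S + \lambda I)\big) \;-\; \log\det(L + J).
$$

The $\ell_1$ penalty thus amounts to a uniform shift of the diagonal of $S$ rather than an entrywise shrinkage, which is why it cannot selectively zero out edges, and why increasing $\lambda$ can densify rather than sparsify the estimate; a nonconvex entrywise penalty avoids this collapse.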
Cite
Text
Ying et al. "Nonconvex Sparse Graph Learning Under Laplacian Constrained Graphical Model." Neural Information Processing Systems, 2020.

Markdown

[Ying et al. "Nonconvex Sparse Graph Learning Under Laplacian Constrained Graphical Model." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/ying2020neurips-nonconvex/)

BibTeX
@inproceedings{ying2020neurips-nonconvex,
title = {{Nonconvex Sparse Graph Learning Under Laplacian Constrained Graphical Model}},
author = {Ying, Jiaxi and de Miranda Cardoso, José Vinícius and Palomar, Daniel},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/ying2020neurips-nonconvex/}
}