Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization

Abstract

We propose a nonconvex estimator for the covariate-adjusted precision matrix estimation problem in the high-dimensional regime, under sparsity constraints. To compute this estimator, we propose an alternating gradient descent algorithm with hard thresholding. Compared with existing methods along this line of research, which lack theoretical guarantees on the optimization error and/or the statistical error, the proposed algorithm is not only computationally much more efficient, with a linear rate of convergence, but also attains the optimal statistical rate up to a logarithmic factor. Thorough experiments on both synthetic and real data support our theory.
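For intuition, the sketch below illustrates one plausible form of alternating gradient descent with hard thresholding for this problem, assuming the standard covariate-adjusted Gaussian graphical model Y = X Gamma + E, where the rows of E are N(0, inv(Theta)) and both the regression coefficient matrix Gamma and the precision matrix Theta are sparse. The function names, step sizes, eigenvalue-floor projection, and the choice to threshold only the off-diagonal part of Theta are illustrative assumptions, not the exact procedure analyzed in the paper.

import numpy as np

def hard_threshold(M, s):
    # Zero out all but the s largest-magnitude entries of M.
    if s <= 0:
        return np.zeros_like(M)
    if s >= M.size:
        return M
    cutoff = np.sort(np.abs(M).ravel())[-s]
    return M * (np.abs(M) >= cutoff)

def alt_gd_hard_threshold(X, Y, s_gamma, s_theta, eta_gamma=0.01,
                          eta_theta=0.01, n_iter=200, eig_floor=1e-3):
    # Illustrative alternating gradient descent with hard thresholding for
    # the model Y = X Gamma + E, rows of E ~ N(0, inv(Theta)).
    n = X.shape[0]
    p = Y.shape[1]
    Gamma = np.zeros((X.shape[1], p))
    Theta = np.eye(p)
    for _ in range(n_iter):
        # Sample covariance of residuals: S(Gamma) = (1/n)(Y - X Gamma)^T (Y - X Gamma).
        R = Y - X @ Gamma
        S = R.T @ R / n
        # Gradient step on Theta for tr(S Theta) - log det Theta, then symmetrize.
        Theta = Theta - eta_theta * (S - np.linalg.inv(Theta))
        Theta = (Theta + Theta.T) / 2
        # Keep Theta positive definite by flooring its eigenvalues (sketch-level fix).
        w, V = np.linalg.eigh(Theta)
        Theta = V @ np.diag(np.maximum(w, eig_floor)) @ V.T
        # Hard-threshold the off-diagonal part of Theta to enforce sparsity.
        diag = np.diag(np.diag(Theta))
        Theta = diag + hard_threshold(Theta - diag, s_theta)
        # Gradient step on Gamma, followed by hard thresholding.
        grad_gamma = -(2.0 / n) * X.T @ R @ Theta
        Gamma = hard_threshold(Gamma - eta_gamma * grad_gamma, s_gamma)
    return Gamma, Theta

# Toy usage: n = 200 samples, 5 covariates, 10 responses (all values hypothetical).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Y = X @ rng.standard_normal((5, 10)) + rng.standard_normal((200, 10))
Gamma_hat, Theta_hat = alt_gd_hard_threshold(X, Y, s_gamma=20, s_theta=10)

Hard thresholding keeps only the largest-magnitude entries after each gradient step, enforcing the sparsity constraints directly rather than through a convex penalty, which is what makes the overall formulation nonconvex.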

Cite

Text

Chen et al. "Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization." International Conference on Machine Learning, 2018.

Markdown

[Chen et al. "Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/chen2018icml-covariate/)

BibTeX

@inproceedings{chen2018icml-covariate,
  title     = {{Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization}},
  author    = {Chen, Jinghui and Xu, Pan and Wang, Lingxiao and Ma, Jian and Gu, Quanquan},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {922--931},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/chen2018icml-covariate/}
}