A Proximal Newton Framework for Composite Minimization: Graph Learning Without Cholesky Decompositions and Matrix Inversions

Abstract

We propose an algorithmic framework for minimizing convex composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with a local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate within our framework an algorithm for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
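
For concreteness, the problem template and its graph learning instance can be sketched in LaTeX as follows. The notation is illustrative rather than quoted from the paper: the standard graphical-lasso formulation is assumed for the precision matrix problem, and the proximal Newton step is shown in its generic form.

% Composite template: f is smooth and self-concordant, g is convex and possibly nonsmooth.
\[ \min_{x \in \mathbb{R}^n} \; F(x) := f(x) + g(x) \]

% Graph learning instance (assumed standard graphical-lasso form): S is the
% empirical covariance matrix and \rho > 0 the sparsity weight; -\log\det(\Theta)
% is the self-concordant part, the \ell_1 term the nonsmooth regularizer.
\[ \min_{\Theta \succ 0} \; -\log\det(\Theta) + \operatorname{tr}(S\Theta) + \rho \,\|\operatorname{vec}(\Theta)\|_1 \]

% Generic proximal Newton step: minimize g plus a second-order model of f at x;
% the analytic step-size selection mentioned in the abstract plays the role
% that a backtracking line search would otherwise play.
\[ x^{+} = \operatorname*{arg\,min}_{y} \; \nabla f(x)^{\top}(y - x) + \tfrac{1}{2}(y - x)^{\top} \nabla^{2} f(x)\,(y - x) + g(y) \]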

Cite

Text

Tran Dinh et al. "A Proximal Newton Framework for Composite Minimization: Graph Learning Without Cholesky Decompositions and Matrix Inversions." International Conference on Machine Learning, 2013.

Markdown

[Tran Dinh et al. "A Proximal Newton Framework for Composite Minimization: Graph Learning Without Cholesky Decompositions and Matrix Inversions." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/trandinh2013icml-proximal/)

BibTeX

@inproceedings{trandinh2013icml-proximal,
  title     = {{A Proximal Newton Framework for Composite Minimization: Graph Learning Without Cholesky Decompositions and Matrix Inversions}},
  author    = {Tran Dinh, Quoc and Kyrillidis, Anastasios and Cevher, Volkan},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {271--279},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/trandinh2013icml-proximal/}
}