Computation-Risk Tradeoffs for Covariance-Thresholded Regression
Abstract
We present a family of linear regression estimators that provides a fine-grained tradeoff between statistical accuracy and computational efficiency. The estimators are based on hard thresholding of the sample covariance matrix entries together with ℓ2-regularization (ridge regression). We analyze the predictive risk of this family of estimators as a function of the threshold and regularization parameters. With appropriate parameter choices, the estimate is the solution to a sparse, diagonally dominant linear system, solvable in near-linear time. Our analysis shows how the risk varies with the sparsity and regularization level, thus establishing a statistical estimation setting for which there is an explicit, smooth tradeoff between risk and computation. Simulations are provided to support the theoretical analysis.
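The estimator described in the abstract can be sketched directly: hard-threshold the off-diagonal entries of the sample covariance, add a ridge term, and solve the resulting sparse linear system. The following is a minimal NumPy illustration of that construction; the function and parameter names are illustrative, not from the paper, and a dense solver stands in for the near-linear-time sparse solver the paper's analysis assumes.

```python
import numpy as np

def thresholded_ridge(X, y, thresh, lam):
    """Sketch of covariance-thresholded ridge regression.

    Hard-thresholds off-diagonal entries of the sample covariance at
    `thresh`, adds an l2 (ridge) penalty `lam`, and solves the
    resulting linear system for the coefficient vector.
    """
    n, p = X.shape
    S = X.T @ X / n                       # sample covariance (uncentered)
    mask = np.abs(S) >= thresh            # keep only large entries
    np.fill_diagonal(mask, True)          # never threshold the diagonal
    S_t = S * mask                        # hard-thresholded covariance
    # The ridge term lam*I shifts the spectrum; for large enough lam
    # the thresholded system becomes diagonally dominant.
    return np.linalg.solve(S_t + lam * np.eye(p), X.T @ y / n)

# Example: with thresh=0 this reduces to ordinary ridge regression.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ np.ones(10) + 0.1 * rng.standard_normal(200)
beta = thresholded_ridge(X, y, thresh=0.1, lam=0.5)
```

Raising `thresh` sparsifies the system (cheaper to solve) at the cost of statistical accuracy, which is the computation-risk tradeoff the paper quantifies.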
Cite
Shender and Lafferty. "Computation-Risk Tradeoffs for Covariance-Thresholded Regression." International Conference on Machine Learning, 2013.
BibTeX
@inproceedings{shender2013icml-computationrisk,
title = {{Computation-Risk Tradeoffs for Covariance-Thresholded Regression}},
author = {Shender, Dinah and Lafferty, John},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {756--764},
volume = {28},
url = {https://mlanthology.org/icml/2013/shender2013icml-computationrisk/}
}