Elementary Estimators for High-Dimensional Linear Regression
Abstract
We consider the problem of structurally constrained high-dimensional linear regression. This problem has attracted considerable attention over the last decade, with state-of-the-art statistical estimators based on solving regularized convex programs. While these typically non-smooth convex programs can be solved in polynomial time, scaling state-of-the-art optimization methods to very large-scale problems remains an ongoing and rich area of research. In this paper, we attempt to address this scaling issue at the source, by asking whether one can build *simpler*, possibly closed-form estimators that nonetheless come with statistical guarantees comparable to those of regularized likelihood estimators. We answer this question in the affirmative, with variants of the classical ridge and ordinary least squares (OLS) estimators for linear regression. We analyze our estimators in the high-dimensional setting, and moreover provide empirical corroboration of their performance on simulated as well as real-world microarray data.
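The abstract does not spell out the estimator, but the flavor of the paper's closed-form construction is a two-step recipe: compute a ridge-style backbone estimate, then soft-threshold it elementwise. Below is a minimal NumPy sketch under that reading; the function name `elementary_estimator` and the parameters `eps` and `lam` are illustrative assumptions, not the paper's exact formulation or tuning.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise soft-thresholding operator S_lam(v).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def elementary_estimator(X, y, eps=1.0, lam=0.1):
    # Closed-form sketch: soft-threshold a ridge-style backbone
    #   backbone = (X^T X / n + eps * I)^{-1} X^T y / n.
    # eps and lam are hypothetical tuning parameters; in practice the
    # theory or cross-validation would set them.
    n, p = X.shape
    backbone = np.linalg.solve(X.T @ X / n + eps * np.eye(p), X.T @ y / n)
    return soft_threshold(backbone, lam)

# Tiny usage example on simulated sparse data (n < p):
rng = np.random.default_rng(0)
n, p, k = 100, 200, 5
theta_true = np.zeros(p)
theta_true[:k] = 1.0
X = rng.standard_normal((n, p))
y = X @ theta_true + 0.1 * rng.standard_normal(n)
theta_hat = elementary_estimator(X, y, eps=0.5, lam=0.05)
print("nonzeros recovered:", np.flatnonzero(theta_hat))
```

The appeal of such a sketch is that it involves only a single linear solve and a thresholding pass, with no iterative optimization over a non-smooth objective.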
Cite
Text
Yang et al. "Elementary Estimators for High-Dimensional Linear Regression." International Conference on Machine Learning, 2014.

Markdown
[Yang et al. "Elementary Estimators for High-Dimensional Linear Regression." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/yang2014icml-elementary/)

BibTeX
@inproceedings{yang2014icml-elementary,
  title = {{Elementary Estimators for High-Dimensional Linear Regression}},
  author = {Yang, Eunho and Lozano, Aurelie and Ravikumar, Pradeep},
  booktitle = {International Conference on Machine Learning},
  year = {2014},
  pages = {388-396},
  volume = {32},
  url = {https://mlanthology.org/icml/2014/yang2014icml-elementary/}
}