High-Dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency
Abstract
We consider the problem of high-dimensional variable selection: given n noisy observations of a k-sparse vector β* ∈ Rp, estimate the subset of non-zero entries of β*. A significant body of work has studied the behavior of l1-relaxations when applied to random measurement matrices that are dense (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and statistical efficiency, as measured by the minimal number of observations n required for correct variable selection with probability converging to one. Our main result is to prove that it is possible to let the fraction of non-zero entries γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row, while retaining the same statistical efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.
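The setup described in the abstract can be sketched in code: draw a γ-sparsified sign ensemble, observe a k-sparse signal through it with noise, and attempt support recovery via an l1-relaxation. All parameter choices below (p, k, n, γ, the noise level, the regularization weight, and the plain ISTA solver) are illustrative assumptions for this sketch, not the paper's experimental settings.

```python
import numpy as np

# Illustrative problem sizes (assumptions, not the paper's settings).
rng = np.random.default_rng(0)
p, k, n, gamma = 200, 5, 120, 0.1

# True k-sparse signal beta* with entries well above the noise level.
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.choice([-1.0, 1.0], size=k) * 5.0

# gamma-sparsified sign ensemble: each entry is non-zero with
# probability gamma, scaled so columns have unit norm on average.
mask = rng.random((n, p)) < gamma
signs = rng.choice([-1.0, 1.0], size=(n, p))
X = mask * signs / np.sqrt(gamma * n)

# Noisy observations y = X beta* + w.
y = X @ beta + 0.01 * rng.standard_normal(n)

# l1-relaxation (Lasso) solved by plain ISTA / proximal gradient.
lam = 0.05                              # regularization level (assumed)
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / squared spectral norm
b = np.zeros(p)
for _ in range(5000):
    b = b - step * (X.T @ (X @ b - y))  # gradient step on 0.5*||y - Xb||^2
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft-threshold

# Correct variable selection: the k largest coordinates of the
# estimate should coincide with the true support.
estimated_support = set(np.argsort(-np.abs(b))[:k].tolist())
```

Note that each row of X has only about γp non-zero entries, so as γ shrinks the measurements become much cheaper to store and apply while, per the paper's main result, the sample size n needed for recovery can remain of the same order as for dense ensembles.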
Cite
Text
Omidiran and Wainwright. "High-Dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency." Journal of Machine Learning Research, 2010.
Markdown
[Omidiran and Wainwright. "High-Dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency." Journal of Machine Learning Research, 2010.](https://mlanthology.org/jmlr/2010/omidiran2010jmlr-highdimensional/)
BibTeX
@article{omidiran2010jmlr-highdimensional,
title = {{High-Dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency}},
author = {Omidiran, Dapo and Wainwright, Martin J.},
journal = {Journal of Machine Learning Research},
year = {2010},
pages = {2361--2386},
volume = {11},
url = {https://mlanthology.org/jmlr/2010/omidiran2010jmlr-highdimensional/}
}