Outlier-Robust Estimation of a Sparse Linear Model Using $\ell_1$-Penalized Huber's $M$-Estimator
Abstract
We study the problem of estimating a $p$-dimensional $s$-sparse vector in a linear model with Gaussian design. In the case where the labels are contaminated by at most $o$ adversarial outliers, we prove that the $\ell_1$-penalized Huber's $M$-estimator based on $n$ samples attains the optimal rate of convergence $(s/n)^{1/2} + (o/n)$, up to a logarithmic factor. For more general design matrices, our results highlight the importance of two properties: the transfer principle and the incoherence property. These properties with suitable constants are shown to yield the optimal rates of robust estimation with adversarial contamination.
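The estimator studied here minimizes a Huber loss over the residuals plus an $\ell_1$ penalty on the coefficients. The paper's contribution is the statistical analysis, not an algorithm, but the objective is easy to sketch numerically. Below is a minimal proximal-gradient (ISTA) implementation in NumPy; the solver choice, the penalty level `lam`, the Huber threshold `delta`, and the synthetic-data settings are all illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (coordinatewise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_huber(X, y, lam, delta=1.0, n_iter=2000):
    """l1-penalized Huber M-estimation via proximal gradient (ISTA).

    Minimizes  sum_i huber_delta(y_i - x_i' theta) + lam * ||theta||_1.
    """
    n, p = X.shape
    # The Huber loss is 1-smooth in the residual, so the smooth part of the
    # objective has a gradient-Lipschitz constant of ||X||_2^2 (spectral norm).
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2
    theta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ theta
        # Huber influence function: identity on [-delta, delta], clipped outside,
        # so each sample's pull on the gradient is bounded.
        psi = np.clip(r, -delta, delta)
        theta = soft_threshold(theta + step * (X.T @ psi), step * lam)
    return theta

# Synthetic check (illustrative): sparse signal, Gaussian design,
# a few grossly corrupted labels.
rng = np.random.default_rng(0)
n, p, s, o = 200, 50, 5, 10
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:s] = 2.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)
y[:o] += 20.0  # adversarial outliers in the labels
theta_hat = l1_huber(X, y, lam=10.0)
```

Because the influence function $\psi$ is bounded by $\delta$, each corrupted label can shift the gradient by at most $\delta \lVert x_i \rVert$; with squared loss, the same outlier would contribute its full residual of 20, which is what makes the plain Lasso fragile in this contamination model.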
Cite
Text
Dalalyan and Thompson. "Outlier-Robust Estimation of a Sparse Linear Model Using $\ell_1$-Penalized Huber's $M$-Estimator." Neural Information Processing Systems, 2019.
Markdown
[Dalalyan and Thompson. "Outlier-Robust Estimation of a Sparse Linear Model Using $\ell_1$-Penalized Huber's $M$-Estimator." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/dalalyan2019neurips-outlierrobust/)
BibTeX
@inproceedings{dalalyan2019neurips-outlierrobust,
title = {{Outlier-Robust Estimation of a Sparse Linear Model Using $\ell_1$-Penalized Huber's $M$-Estimator}},
author = {Dalalyan, Arnak and Thompson, Philip},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {13188--13198},
url = {https://mlanthology.org/neurips/2019/dalalyan2019neurips-outlierrobust/}
}