High-Dimensional Robust Mean Estimation via Gradient Descent
Abstract
We study the problem of high-dimensional robust mean estimation in the presence of a constant fraction of adversarial outliers. A recent line of work has provided sophisticated polynomial-time algorithms for this problem with dimension-independent error guarantees for a range of natural distribution families. In this work, we show that a natural non-convex formulation of the problem can be solved directly by gradient descent. Our approach leverages a novel structural lemma, roughly showing that any approximate stationary point of our non-convex objective gives a near-optimal solution to the underlying robust estimation task. Our work establishes an intriguing connection between algorithmic high-dimensional robust statistics and non-convex optimization, which may have broader applications to other robust estimation tasks.
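To make the approach concrete, here is a minimal sketch (not the authors' exact implementation) of robust mean estimation by projected sub-gradient descent. It follows the formulation studied in the paper: optimize sample weights w over the capped simplex {w : sum(w) = 1, 0 <= w_i <= 1/((1-eps)n)} to minimize f(w) = lambda_max(Sigma_w), the top eigenvalue of the weighted empirical covariance, and return the weighted mean. The function names `robust_mean_pgd` and `project_capped_simplex`, the step-size schedule, and the iteration count are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def project_capped_simplex(w, cap):
    """Euclidean projection onto {w : sum(w) = 1, 0 <= w_i <= cap}.

    Bisects on a scalar shift t so that sum(clip(w - t, 0, cap)) = 1;
    the sum is monotone decreasing in t, so bisection converges.
    """
    lo, hi = w.min() - 1.0, w.max()
    for _ in range(60):
        t = 0.5 * (lo + hi)
        if np.clip(w - t, 0.0, cap).sum() > 1.0:
            lo = t
        else:
            hi = t
    return np.clip(w - 0.5 * (lo + hi), 0.0, cap)

def robust_mean_pgd(X, eps, n_iters=300, lr=None):
    """Projected sub-gradient descent on w -> lambda_max(Sigma_w).

    X: (n, d) array of samples, an eps fraction of which may be
    adversarial outliers. Returns the weighted mean as the estimate.
    Step sizes below are an illustrative assumption.
    """
    n, d = X.shape
    cap = 1.0 / ((1.0 - eps) * n)       # no point may carry more than this weight
    lr = lr if lr is not None else 1.0 / n
    w = np.full(n, 1.0 / n)
    for t in range(n_iters):
        mu = w @ X                      # weighted mean mu_w
        Xc = X - mu                     # centered samples
        cov = Xc.T @ (w[:, None] * Xc)  # weighted covariance Sigma_w
        v = np.linalg.eigh(cov)[1][:, -1]   # top eigenvector of Sigma_w
        g = (Xc @ v) ** 2               # sub-gradient of lambda_max(Sigma_w) in w
        step = lr / np.sqrt(t + 1.0)
        w = project_capped_simplex(w - step * g / (np.linalg.norm(g) + 1e-12), cap)
    return w @ X
```

The cap of 1/((1-eps)n) on each weight is what limits the adversary's influence: no single sample, in particular no outlier, can dominate the weighted mean. The sub-gradient of lambda_max with respect to w_i simplifies to the squared projection of the centered point onto the top eigenvector, so the descent step systematically down-weights the points most responsible for inflating the covariance in its worst direction.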
Cite

Text
Cheng et al. "High-Dimensional Robust Mean Estimation via Gradient Descent." International Conference on Machine Learning, 2020.

Markdown
[Cheng et al. "High-Dimensional Robust Mean Estimation via Gradient Descent." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/cheng2020icml-highdimensional/)

BibTeX
@inproceedings{cheng2020icml-highdimensional,
title = {{High-Dimensional Robust Mean Estimation via Gradient Descent}},
author = {Cheng, Yu and Diakonikolas, Ilias and Ge, Rong and Soltanolkotabi, Mahdi},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {1768--1778},
volume = {119},
url = {https://mlanthology.org/icml/2020/cheng2020icml-highdimensional/}
}