List-Decodable Linear Regression

Abstract

We give the first polynomial-time algorithm for robust regression in the list-decodable setting, where an adversary can corrupt more than half of the examples. For any \alpha < 1, our algorithm takes as input a sample (x_i, y_i)_{i \leq n} of n linear equations, of which \alpha n satisfy y_i = \langle x_i, \ell^* \rangle + \zeta for some small noise \zeta and the remaining (1-\alpha) n are *arbitrarily* chosen. It outputs a list L of size O(1/\alpha), a fixed constant, that contains an \ell close to \ell^*. Our algorithm succeeds whenever the inliers are drawn from a certifiably anti-concentrated distribution D. In particular, this gives a (d/\alpha)^{O(1/\alpha^8)}-time algorithm that finds an O(1/\alpha)-size list when the inlier distribution is the standard Gaussian. For discrete product distributions that are anti-concentrated only in regular directions, we give an algorithm that achieves a similar guarantee under the promise that \ell^* has all coordinates of the same magnitude. To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary. To solve the problem, we introduce a new framework for list-decodable learning that strengthens the "identifiability to algorithms" paradigm based on the sum-of-squares method.
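To make the generative model concrete, here is a minimal Python sketch of the setting described in the abstract, with inliers drawn from a standard Gaussian (one of the distributions the paper covers). All names (alpha, ell_star, noise_scale, list_contains_close) are illustrative choices, not from the paper, and the snippet only generates a corrupted sample and states the success criterion; it does not implement the authors' sum-of-squares algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, alpha = 10, 1000, 0.2           # dimension, sample size, inlier fraction
noise_scale = 0.01                    # magnitude of the small noise zeta
ell_star = rng.standard_normal(d)
ell_star /= np.linalg.norm(ell_star)  # ground-truth regressor ell^* (unit norm)

n_in = int(alpha * n)

# Inliers: y_i = <x_i, ell_star> + zeta, with x_i ~ N(0, I_d).
x_in = rng.standard_normal((n_in, d))
y_in = x_in @ ell_star + noise_scale * rng.standard_normal(n_in)

# Outliers: the remaining (1 - alpha) n equations are adversarial; here we
# plant a second regressor, which already defeats naive least squares.
ell_bad = rng.standard_normal(d)
x_out = rng.standard_normal((n - n_in, d))
y_out = x_out @ ell_bad

x = np.vstack([x_in, x_out])          # full corrupted sample (x_i, y_i)_{i <= n}
y = np.concatenate([y_in, y_out])

# Success criterion for a list-decodable learner: the returned list L
# (of size O(1/alpha)) should contain some ell close to ell_star.
def list_contains_close(L, ell_true, tol=0.1):
    return any(np.linalg.norm(ell - ell_true) <= tol for ell in L)
```

Since the outliers here follow their own linear model, ordinary least squares on the full sample returns a blend of \ell^* and the planted direction, which illustrates why a short list, rather than a single estimate, is the natural output guarantee when \alpha < 1/2.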

Cite

Text

Karmalkar et al. "List-Decodable Linear Regression." Neural Information Processing Systems, 2019.

Markdown

[Karmalkar et al. "List-Decodable Linear Regression." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/karmalkar2019neurips-listdecodable/)

BibTeX

@inproceedings{karmalkar2019neurips-listdecodable,
  title     = {{List-Decodable Linear Regression}},
  author    = {Karmalkar, Sushrut and Klivans, Adam and Kothari, Pravesh},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {7425--7434},
  url       = {https://mlanthology.org/neurips/2019/karmalkar2019neurips-listdecodable/}
}