New Bounds on Compressive Linear Least Squares Regression

Abstract

In this paper we provide a new analysis of compressive least squares regression that removes a spurious log N factor from previous bounds, where N is the number of training points. Our new bound has a clear interpretation and reveals meaningful structural properties of the linear regression problem that make it effectively solvable in a low-dimensional random subspace. In addition, the main part of our analysis does not require the compressive matrix to have the Johnson-Lindenstrauss property or the restricted isometry property (RIP). Instead, we only require its entries to be drawn i.i.d. from a zero-mean symmetric distribution with finite first four moments.
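The setting the abstract describes can be sketched in a few lines: project the data through a random matrix whose entries are i.i.d., zero-mean, and symmetric with finite first four moments (Rademacher entries are one such choice), then solve ordinary least squares in the compressed space. The data, dimensions, and scaling below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: N points in d dimensions.
N, d, k = 200, 50, 10          # k: target (compressed) dimension
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

# Random projection with i.i.d. zero-mean symmetric entries having
# finite first four moments; Rademacher (+1/-1) entries qualify.
R = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

# Ordinary least squares in the k-dimensional compressed space.
Xc = X @ R.T                    # compressed design matrix, shape (N, k)
w_c, *_ = np.linalg.lstsq(Xc, y, rcond=None)

# Predictions for new points use the same fixed projection R.
y_hat = Xc @ w_c
```

The key point of the compressive approach is that the regression is fit entirely in the k-dimensional space, so its cost no longer depends on the ambient dimension d once the projection has been applied.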

Cite

Text

Kabán. "New Bounds on Compressive Linear Least Squares Regression." International Conference on Artificial Intelligence and Statistics, 2014.

Markdown

[Kabán. "New Bounds on Compressive Linear Least Squares Regression." International Conference on Artificial Intelligence and Statistics, 2014.](https://mlanthology.org/aistats/2014/kaban2014aistats-new/)

BibTeX

@inproceedings{kaban2014aistats-new,
  title     = {{New Bounds on Compressive Linear Least Squares Regression}},
  author    = {Kabán, Ata},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2014},
  pages     = {448-456},
  url       = {https://mlanthology.org/aistats/2014/kaban2014aistats-new/}
}