Linear Regression with Limited Observation

Abstract

We consider the most common variants of linear regression, including Ridge, Lasso, and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for these problems: to reach a given accuracy, our Lasso and Ridge regression algorithms need the same total number of attributes (up to constant factors) as full-information algorithms. For Support-vector regression, we require exponentially fewer attributes than the state of the art. This resolves an open problem recently posed by Cesa-Bianchi et al. (2010). Experiments confirm the theoretical bounds, with our algorithms showing superior performance compared to the state of the art.
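To make the limited-observation setting concrete, here is a minimal sketch (not the algorithm from the paper): stochastic gradient descent for Ridge regression in which only a small budget of attributes of each training example is revealed, with two independent index samples keeping the gradient estimate unbiased. The function name, parameters, and the particular estimator are illustrative assumptions.

import numpy as np

def limited_attribute_ridge(X, y, k, lam=0.1, eta=0.01, seed=0):
    """One pass of SGD over (X, y), observing at most 2*k attributes per example.

    Illustrative sketch only -- not the authors' algorithm.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n):
        # Two independent samples of k attribute indices; their independence
        # keeps the product estimate in the gradient unbiased.
        s1 = rng.choice(d, size=k, replace=False)
        s2 = rng.choice(d, size=k, replace=False)
        # Unbiased estimate of the example x_t from the first sample.
        x_hat = np.zeros(d)
        x_hat[s1] = (d / k) * X[t, s1]             # E[x_hat] = x_t
        # Unbiased estimate of the inner product <w, x_t> from the second sample.
        dot_hat = (d / k) * (w[s2] @ X[t, s2])     # E[dot_hat] = <w, x_t>
        # Unbiased estimate of the ridge gradient 2(<w,x> - y) x + 2*lam*w.
        grad = 2.0 * (dot_hat - y[t]) * x_hat + 2.0 * lam * w
        w -= eta * grad
    return w

For example, on data with d = 100 attributes and a budget of k = 5 per sample, limited_attribute_ridge(X, y, k=5) trains a weight vector while touching only a small fraction of each example's attributes.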

Cite

Text

Hazan and Koren. "Linear Regression with Limited Observation." International Conference on Machine Learning, 2012.

Markdown

[Hazan and Koren. "Linear Regression with Limited Observation." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/hazan2012icml-linear/)

BibTeX

@inproceedings{hazan2012icml-linear,
  title     = {{Linear Regression with Limited Observation}},
  author    = {Hazan, Elad and Koren, Tomer},
  booktitle = {International Conference on Machine Learning},
  year      = {2012},
  url       = {https://mlanthology.org/icml/2012/hazan2012icml-linear/}
}