Subspace Embedding and Linear Regression with Orlicz Norm

Abstract

We consider a generalization of the classic linear regression problem to the case when the loss is an Orlicz norm. An Orlicz norm is parameterized by a non-negative convex function $G: \mathbb{R}_+ \to \mathbb{R}_+$ with $G(0) = 0$: the Orlicz norm of an $n$-dimensional vector $x$ is defined as $\|x\|_G = \inf\{\alpha > 0 \mid \sum_{i=1}^n G(|x_i|/\alpha) \le 1\}$. We consider the cases where the function $G$ grows subquadratically. Our main result is based on a new oblivious embedding which embeds the column space of a given $n \times d$ matrix $A$ with Orlicz norm into a lower-dimensional space with $\ell_2$ norm. Specifically, we show how to efficiently find an $m \times n$ embedding matrix $S$ ($m < n$), such that for every $d$-dimensional vector $x$, we have $\Omega(1/(d \log n)) \cdot \|Ax\|_G \le \|SAx\|_2 \le O(d^2 \log n) \cdot \|Ax\|_G$. By applying this subspace embedding technique, we obtain an approximation algorithm for the regression problem $\min_x \|Ax - b\|_G$, up to an $O(d \log^2 n)$ factor. As a further application of our techniques, we show how to use them to improve on the algorithm for the $\ell_p$ low-rank matrix approximation problem for $1 \le p < 2$.
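As a concrete illustration of the definition (our own sketch, not code from the paper), the snippet below evaluates $\|x\|_G$ by bisection over $\alpha$, using the fact that $\alpha \mapsto \sum_i G(|x_i|/\alpha)$ is non-increasing when $G$ is non-decreasing with $G(0) = 0$. The helper name `orlicz_norm` is hypothetical; choosing $G(t) = t^p$ recovers the usual $\ell_p$ norm, which gives a simple sanity check.

```python
import numpy as np

def orlicz_norm(x, G, iters=200):
    """Evaluate ||x||_G = inf{ alpha > 0 : sum_i G(|x_i| / alpha) <= 1 }
    by bisection. Since G is non-decreasing with G(0) = 0, the map
    alpha -> sum_i G(|x_i| / alpha) is non-increasing, so the feasible
    set of alphas is an interval and bisection converges to its infimum."""
    x = np.abs(np.asarray(x, dtype=float))
    if not np.any(x):
        return 0.0
    phi = lambda a: float(np.sum(G(x / a)))
    # Grow an upper bracket until the unit-ball constraint is satisfied.
    hi = float(np.max(x))
    while phi(hi) > 1.0:
        hi *= 2.0
    lo = 0.0  # phi is never evaluated at 0; midpoints stay positive
    for _ in range(iters):  # fixed halvings, well below machine precision
        mid = 0.5 * (lo + hi)
        if phi(mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

# Sanity check: G(t) = t^p gives back the l_p norm (p = 1.5 is subquadratic).
p = 1.5
x = np.array([3.0, -4.0, 1.0])
print(orlicz_norm(x, lambda t: t ** p))   # ~5.86, matches the l_1.5 norm
print(np.linalg.norm(x, ord=p))
```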

Cite

Text

Andoni et al. "Subspace Embedding and Linear Regression with Orlicz Norm." International Conference on Machine Learning, 2018.

Markdown

[Andoni et al. "Subspace Embedding and Linear Regression with Orlicz Norm." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/andoni2018icml-subspace/)

BibTeX

@inproceedings{andoni2018icml-subspace,
  title     = {{Subspace Embedding and Linear Regression with Orlicz Norm}},
  author    = {Andoni, Alexandr and Lin, Chengyu and Sheng, Ying and Zhong, Peilin and Zhong, Ruiqi},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {224--233},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/andoni2018icml-subspace/}
}