Competitive On-Line Linear Regression
Abstract
We apply a general algorithm for merging prediction strategies (the Aggregating Algorithm) to the problem of linear regression with the square loss; our main assumption is that the response variable is bounded. It turns out that for this particular problem the Aggregating Algorithm resembles, but is slightly different from, the well-known ridge estimation procedure. From general results about the Aggregating Algorithm we deduce a guaranteed bound on the difference between our algorithm's performance and the best, in some sense, linear regression function's performance. We show that the AA attains the optimal constant in our bound, whereas the constant attained by the ridge regression procedure in general can be 4 times worse.
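The sketch below illustrates, under stated assumptions, the difference the abstract alludes to: the AA's prediction is a ridge-type estimate in which the current input is also counted in the regularized Gram matrix before inversion (this form of the forecaster is now often called the Vovk-Azoury-Warmuth forecaster), and, since the response is assumed bounded, predictions may safely be clipped to [-Y, Y]. This is a minimal reconstruction for comparison purposes, not code from the paper; the function names, the parameter `a`, and the clipping step are assumptions.

```python
import numpy as np

def aa_predict(xs, ys, x_new, a=1.0, y_bound=1.0):
    """One-step prediction of the Aggregating Algorithm for square loss.

    Assumed form (Vovk-Azoury-Warmuth): the current input x_new is
    included in the regularized Gram matrix before inverting, and the
    prediction is clipped to the assumed response bound [-y_bound, y_bound].
    """
    d = len(x_new)
    A = a * np.eye(d)           # ridge regularizer a*I
    b = np.zeros(d)
    for x, y in zip(xs, ys):
        A += np.outer(x, x)     # Gram matrix of past inputs
        b += y * x
    A += np.outer(x_new, x_new) # the AA's extra term: count x_new too
    pred = x_new @ np.linalg.solve(A, b)
    return np.clip(pred, -y_bound, y_bound)

def ridge_predict(xs, ys, x_new, a=1.0):
    """Ordinary on-line ridge regression prediction, for comparison:
    same estimate but without the x_new term and without clipping."""
    d = len(x_new)
    A = a * np.eye(d)
    b = np.zeros(d)
    for x, y in zip(xs, ys):
        A += np.outer(x, x)
        b += y * x
    return x_new @ np.linalg.solve(A, b)

# Toy on-line run: predict each response before seeing it.
rng = np.random.default_rng(0)
xs, ys = [], []
for t in range(5):
    x = rng.normal(size=3)
    y = float(np.clip(x @ np.array([0.5, -0.3, 0.2]), -1, 1))
    print(t, aa_predict(xs, ys, x), ridge_predict(xs, ys, x))
    xs.append(x)
    ys.append(y)
```

Clipping to [-Y, Y] never increases the square loss when the true response satisfies |y| <= Y, so it is a safe addition; the exact treatment of the bound in the paper may differ.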
Cite

Vovk. "Competitive On-Line Linear Regression." Neural Information Processing Systems, 1997. https://mlanthology.org/neurips/1997/vovk1997neurips-competitive/

BibTeX
@inproceedings{vovk1997neurips-competitive,
title = {{Competitive On-Line Linear Regression}},
author = {Vovk, Volodya},
booktitle = {Neural Information Processing Systems},
year = {1997},
pages = {364-370},
url = {https://mlanthology.org/neurips/1997/vovk1997neurips-competitive/}
}