An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies

Abstract

The paper presents an upper bound, in the style of competitive on-line prediction, on the square loss of the Aggregating Algorithm for Regression with Changing Dependencies in the linear case. The algorithm is able to compete with any sequence of linear predictors provided the sum of squared Euclidean norms of the differences between consecutive regression coefficient vectors grows at a sublinear rate.
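For orientation, the following is a minimal sketch of the basic Aggregating Algorithm for Regression (the Vovk–Azoury–Warmuth forecaster), which the paper's changing-dependencies variant extends; the variant itself and its bound are not reproduced here. The ridge parameter `a` and the noiseless test data in the usage note are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def aar_predictions(X, y, a=1.0):
    """Online predictions of the basic Aggregating Algorithm for
    Regression with ridge parameter a > 0.

    At step t it predicts b_t' A_t^{-1} x_t, where
    A_t = a*I + sum_{s<=t} x_s x_s'  (the current x_t is included)
    and b_t = sum_{s<t} y_s x_s (outcomes seen so far only).
    """
    T, n = X.shape
    A = a * np.eye(n)            # a*I plus running sum of outer products
    b = np.zeros(n)              # running sum of y_s * x_s
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)      # current signal enters A before predicting
        preds[t] = b @ np.linalg.solve(A, x)
        b += y[t] * x            # outcome y_t is revealed after the prediction
    return preds
```

On data generated by one fixed linear dependence, the cumulative square loss of this forecaster stays within a logarithmic-in-T term of that of the best linear predictor; the paper's result concerns the harder case where the underlying coefficient vector is allowed to drift slowly.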

Cite

Text

Kalnishkan. "An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies." International Conference on Algorithmic Learning Theory, 2016. doi:10.1007/978-3-319-46379-7_16

Markdown

[Kalnishkan. "An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies." International Conference on Algorithmic Learning Theory, 2016.](https://mlanthology.org/alt/2016/kalnishkan2016alt-upper/) doi:10.1007/978-3-319-46379-7_16

BibTeX

@inproceedings{kalnishkan2016alt-upper,
  title     = {{An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies}},
  author    = {Kalnishkan, Yuri},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2016},
  pages     = {238--252},
  doi       = {10.1007/978-3-319-46379-7_16},
  url       = {https://mlanthology.org/alt/2016/kalnishkan2016alt-upper/}
}