Adaptive and Optimal Online Linear Regression on ℓ1-Balls

Abstract

We consider the problem of online linear regression on individual sequences. The goal in this paper is for the forecaster to output sequential predictions that are, after T time rounds, almost as good as those output by the best linear predictor in a given ℓ^1-ball in ℝ^d. We consider both the case where the dimension d is small and the case where it is large relative to the time horizon T. We first present regret bounds with optimal dependencies on the sizes U, X, and Y of the ℓ^1-ball, the input data, and the observations. The minimax regret is shown to exhibit a regime transition around the point d = √T UX/(2Y). Furthermore, we present efficient algorithms that are adaptive, i.e., that do not require knowledge of U, X, Y, and T, but still achieve nearly optimal regret bounds.

Cite

Text

Gerchinovitz and Yu. "Adaptive and Optimal Online Linear Regression on ℓ1-Balls." International Conference on Algorithmic Learning Theory, 2011. doi:10.1007/978-3-642-24412-4_11

Markdown

[Gerchinovitz and Yu. "Adaptive and Optimal Online Linear Regression on ℓ1-Balls." International Conference on Algorithmic Learning Theory, 2011.](https://mlanthology.org/alt/2011/gerchinovitz2011alt-adaptive/) doi:10.1007/978-3-642-24412-4_11

BibTeX

@inproceedings{gerchinovitz2011alt-adaptive,
  title     = {{Adaptive and Optimal Online Linear Regression on ℓ1-Balls}},
  author    = {Gerchinovitz, Sébastien and Yu, Jia Yuan},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2011},
  pages     = {99--113},
  doi       = {10.1007/978-3-642-24412-4_11},
  url       = {https://mlanthology.org/alt/2011/gerchinovitz2011alt-adaptive/}
}