Fast Learning from Non-I.i.d. Observations

Abstract

We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from $\alpha$-mixing processes. To illustrate this oracle inequality, we use it to derive learning rates for some learning methods, including least squares SVMs. Since the proof of the oracle inequality uses recent localization ideas developed for independent and identically distributed (i.i.d.) processes, these learning rates turn out to be close to the optimal rates known in the i.i.d. case.
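For context, the mixing condition in the abstract refers to the standard $\alpha$-mixing (strong mixing) coefficients; this definition is not spelled out above, and the paper's exact assumptions (e.g. a geometric decay rate) may be stronger. For a stationary process $(Z_i)_{i \ge 1}$, one sets

$$\alpha(n) = \sup_{k \ge 1} \; \sup_{A \in \sigma(Z_1, \dots, Z_k),\; B \in \sigma(Z_{k+n}, Z_{k+n+1}, \dots)} \bigl| P(A \cap B) - P(A)\,P(B) \bigr|,$$

and calls the process $\alpha$-mixing if $\alpha(n) \to 0$: the dependence between past and future vanishes as the time gap $n$ grows, which is what replaces independence in the oracle inequality.

As a concrete instance of the regularized ERM scheme the abstract refers to, the following minimal sketch fits a least squares SVM (without offset term) by solving the representer-theorem linear system. The function names, the Gaussian kernel choice, and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

def lssvm_fit(X, y, lam=1e-2, gamma=1.0):
    """Least squares SVM: regularized ERM with the squared loss over an RKHS.

    Solves min_f (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2, whose
    representer-theorem solution f = sum_i alpha_i k(x_i, .) satisfies
    (K + n * lam * I) alpha = y.  Kernel and parameters are illustrative.
    """
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # Gaussian Gram matrix
    n = len(y)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return alpha

def lssvm_predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate the fitted function at new points via the cross Gram matrix."""
    d = (np.sum(X_new ** 2, axis=1)[:, None]
         + np.sum(X_train ** 2, axis=1)[None, :]
         - 2.0 * X_new @ X_train.T)
    return np.exp(-gamma * d) @ alpha

# Usage on dependent toy data (AR(1)-driven, hence non-i.i.d. but mixing):
rng = np.random.default_rng(0)
x = np.empty(200)
x[0] = rng.normal()
for t in range(1, 200):                      # AR(1): x_t = 0.7 x_{t-1} + noise
    x[t] = 0.7 * x[t - 1] + rng.normal(scale=0.5)
X = x[:, None]
y = np.sin(x) + rng.normal(scale=0.1, size=200)
alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, np.array([[0.0]])))  # estimate of sin(0) = 0

The closed-form solve is what makes the least squares SVM a convenient test case for oracle inequalities: the estimator is fully determined by the kernel, the regularization parameter $\lambda$, and the (possibly dependent) sample.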

Cite

Text

Steinwart and Christmann. "Fast Learning from Non-I.i.d. Observations." Neural Information Processing Systems, 2009.

Markdown

[Steinwart and Christmann. "Fast Learning from Non-I.i.d. Observations." Neural Information Processing Systems, 2009.](https://mlanthology.org/neurips/2009/steinwart2009neurips-fast/)

BibTeX

@inproceedings{steinwart2009neurips-fast,
  title     = {{Fast Learning from Non-I.i.d. Observations}},
  author    = {Steinwart, Ingo and Christmann, Andreas},
  booktitle = {Neural Information Processing Systems},
  year      = {2009},
  pages     = {1768--1776},
  url       = {https://mlanthology.org/neurips/2009/steinwart2009neurips-fast/}
}