On the Generalization Ability of On-Line Learning Algorithms

Abstract

In this paper we show that on-line algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments and they hold for arbitrary on-line learning algorithms. Furthermore, when applied to concrete on-line algorithms, our results yield tail bounds that in many cases are comparable to or better than the best known bounds.
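The flavor of the result can be illustrated with a small sketch: run an on-line learner (here a perceptron, as a stand-in example) once over a data stream, count its mistakes M over n rounds, and form a data-dependent bound of the shape M/n plus an O(√(ln(1/δ)/n)) deviation term. The synthetic data, the perceptron choice, and the exact constant in the deviation term are illustrative assumptions, not the paper's precise statement.

```python
import math
import random

def online_perceptron(stream):
    """One pass of the perceptron over the stream.

    Returns the final weight vector and the number of mistakes M,
    which the on-line-to-batch bounds are built from.
    """
    w = [0.0, 0.0]
    mistakes = 0
    for x, y in stream:
        margin = y * (w[0] * x[0] + w[1] * x[1])
        if margin <= 0:                      # mistake (or tie): update
            mistakes += 1
            w = [w[0] + y * x[0], w[1] + y * x[1]]
    return w, mistakes

random.seed(0)
n = 1000
# Illustrative linearly separable stream labeled by the vector (1, -1).
stream = []
for _ in range(n):
    x = (random.gauss(0, 1), random.gauss(0, 1))
    y = 1 if x[0] - x[1] > 0 else -1
    stream.append((x, y))

w, M = online_perceptron(stream)
delta = 0.05
# Data-dependent tail bound on the average risk of the on-line
# hypotheses: the observed mistake rate M/n plus a deviation term
# of order sqrt(ln(1/delta)/n) (constant chosen for illustration).
bound = M / n + math.sqrt((2.0 / n) * math.log(1.0 / delta))
print(f"mistakes={M}  risk bound={bound:.3f}")
```

The point of the construction is that the bound is computed entirely from the observed mistake count, with no capacity-based or concentration-of-measure machinery beyond a simple martingale-style deviation term.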

Cite

Text

Cesa-Bianchi et al. "On the Generalization Ability of On-Line Learning Algorithms." Neural Information Processing Systems, 2001.

Markdown

[Cesa-Bianchi et al. "On the Generalization Ability of On-Line Learning Algorithms." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/cesabianchi2001neurips-generalization/)

BibTeX

@inproceedings{cesabianchi2001neurips-generalization,
  title     = {{On the Generalization Ability of On-Line Learning Algorithms}},
  author    = {Cesa-Bianchi, Nicolò and Conconi, Alex and Gentile, Claudio},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {359--366},
  url       = {https://mlanthology.org/neurips/2001/cesabianchi2001neurips-generalization/}
}