Learning Curves: Asymptotic Values and Rate of Convergence

Abstract

Training classifiers on large databases is computationally demanding. It is desirable to develop efficient procedures for a reliable prediction of a classifier's suitability for implementing a given task, so that resources can be assigned to the most promising candidates or freed for exploring new classifier candidates. We propose such a practical and principled predictive method. Practical because it avoids the costly procedure of training poor classifiers on the whole training set, and principled because of its theoretical foundation. The effectiveness of the proposed procedure is demonstrated for both single- and multi-layer networks.

Cite

Text

Cortes et al. "Learning Curves: Asymptotic Values and Rate of Convergence." Neural Information Processing Systems, 1993.

Markdown

[Cortes et al. "Learning Curves: Asymptotic Values and Rate of Convergence." Neural Information Processing Systems, 1993.](https://mlanthology.org/neurips/1993/cortes1993neurips-learning/)

BibTeX

@inproceedings{cortes1993neurips-learning,
  title     = {{Learning Curves: Asymptotic Values and Rate of Convergence}},
  author    = {Cortes, Corinna and Jackel, L. D. and Solla, Sara A. and Vapnik, Vladimir and Denker, John S.},
  booktitle = {Neural Information Processing Systems},
  year      = {1993},
  pages     = {327--334},
  url       = {https://mlanthology.org/neurips/1993/cortes1993neurips-learning/}
}