Toward Efficient Agnostic Learning

Abstract

In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation.
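The agnostic setting the abstract describes can be illustrated with a minimal sketch: a learner that makes no assumption that any hypothesis in its class fits the data, and simply returns the hypothesis of minimum empirical error (the class of threshold functions and the 20% label noise below are illustrative assumptions, not from the paper).

```python
import random

def empirical_error(h, sample):
    """Fraction of labeled examples (x, y) that hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def agnostic_erm(hypotheses, sample):
    """Return the hypothesis of minimum empirical error on the sample.

    No assumption is made that any hypothesis labels the data perfectly;
    the learner just competes with the best hypothesis in the class.
    """
    return min(hypotheses, key=lambda h: empirical_error(h, sample))

# Illustration: labels mostly follow the rule x >= 0.6, but 20% are
# flipped, so no threshold function fits the sample exactly.
random.seed(0)
hypotheses = [lambda x, t=t / 10: x >= t for t in range(11)]
sample = []
for _ in range(200):
    x = random.random()
    y = (x >= 0.6) != (random.random() < 0.2)  # flip label w.p. 0.2
    sample.append((x, y))

best = agnostic_erm(hypotheses, sample)
err = empirical_error(best, sample)
```

Because the labels are noisy, the best achievable empirical error here is roughly the 20% noise rate; the point of the agnostic model is that the learner's guarantee is stated relative to that best-in-class error rather than zero.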

Cite

Text

Kearns et al. "Toward Efficient Agnostic Learning." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130424

Markdown

[Kearns et al. "Toward Efficient Agnostic Learning." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/kearns1992colt-efficient/) doi:10.1145/130385.130424

BibTeX

@inproceedings{kearns1992colt-efficient,
  title     = {{Toward Efficient Agnostic Learning}},
  author    = {Kearns, Michael J. and Schapire, Robert E. and Sellie, Linda},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1992},
  pages     = {341--352},
  doi       = {10.1145/130385.130424},
  url       = {https://mlanthology.org/colt/1992/kearns1992colt-efficient/}
}