Reliable Agnostic Learning

Abstract

It is well known that in many applications erroneous predictions of one type or another must be avoided. In some applications, like spam detection, false positives are particularly costly. In other applications, like medical diagnosis, abstaining from making a prediction may be preferable to making an incorrect one. In this paper we consider different types of reliable classifiers suited for such situations. We formalize the notion and study properties of reliable classifiers in the spirit of agnostic learning (Haussler, 1992; Kearns, Schapire, and Sellie, 1994), a PAC-like model where no assumption is made on the function being learned. We then give two algorithms for reliable agnostic learning under natural distributions. The first reliably learns DNFs with no false positives using membership queries. The second reliably learns halfspaces from random examples with no false positives or false negatives, but the classifier sometimes abstains from making predictions.
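As a rough illustration of the abstention idea described above (a sketch, not the paper's actual construction), a halfspace classifier can be made reliable in spirit by predicting only when its score clears a confidence margin and abstaining otherwise; the weights and margin below are purely hypothetical:

```python
# Toy sketch of an abstaining classifier over a halfspace:
# it outputs +1 / -1 only when the linear score clears a margin,
# and abstains (returns None) near the decision boundary.
# The weight vector and margin are illustrative, not from the paper.

def make_abstaining_halfspace(w, margin):
    def classify(x):
        score = sum(wi * xi for wi, xi in zip(w, x))
        if score >= margin:
            return +1      # confident positive prediction
        if score <= -margin:
            return -1      # confident negative prediction
        return None        # abstain: score too close to the boundary
    return classify

clf = make_abstaining_halfspace(w=[1.0, -0.5], margin=0.75)
print(clf([2.0, 0.0]))   # far on the positive side -> +1
print(clf([-2.0, 0.0]))  # far on the negative side -> -1
print(clf([0.5, 0.2]))   # score 0.4, inside the margin -> None (abstain)
```

Trading coverage for reliability this way mirrors the paper's "full reliability" setting: errors of both kinds are avoided at the cost of sometimes declining to predict.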

Cite

Text

Adam Tauman Kalai, Varun Kanade, and Yishay Mansour. "Reliable Agnostic Learning." Annual Conference on Computational Learning Theory, 2009. doi:10.1016/j.jcss.2011.12.026

Markdown

[Adam Tauman Kalai, Varun Kanade, and Yishay Mansour. "Reliable Agnostic Learning." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/kalai2009colt-reliable/) doi:10.1016/j.jcss.2011.12.026

BibTeX

@inproceedings{kalai2009colt-reliable,
  title     = {{Reliable Agnostic Learning}},
  author    = {Kalai, Adam Tauman and Kanade, Varun and Mansour, Yishay},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2009},
  doi       = {10.1016/j.jcss.2011.12.026},
  url       = {https://mlanthology.org/colt/2009/kalai2009colt-reliable/}
}