Learning Nested Differences in the Presence of Malicious Noise

Abstract

We investigate the learnability of nested differences of intersection-closed classes in the presence of malicious noise. Examples of intersection-closed classes include axis-parallel rectangles, monomials, and linear subspaces. We present an on-line algorithm whose mistake bound is optimal in the sense that there are concept classes for which any learning algorithm (using nested differences as hypotheses) can be forced to make at least that many mistakes. We also present an algorithm for learning in the PAC model with malicious noise. Surprisingly, the tolerable noise rate of these algorithms does not depend on the complexity of the target class but only on the complexity of the underlying intersection-closed class.
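The paper's algorithms are not reproduced here, but the hypothesis class it studies can be illustrated concretely. The sketch below, a hypothetical example assuming axis-parallel rectangles as the intersection-closed class, shows the standard closure operation (the smallest rectangle containing a set of points) and the membership rule for a nested difference R1 \ (R2 \ (R3 \ ...)): a point is positive iff the deepest rectangle of the chain containing it has odd (1-based) index. All function names are our own, for illustration only.

```python
from typing import List, Tuple

Point = Tuple[float, ...]
Rect = List[Tuple[float, float]]  # per-dimension (lo, hi) intervals


def closure(points: List[Point]) -> Rect:
    """Smallest axis-parallel rectangle containing the points:
    the closure operation for this intersection-closed class."""
    dims = len(points[0])
    return [(min(p[d] for p in points), max(p[d] for p in points))
            for d in range(dims)]


def contains(rect: Rect, x: Point) -> bool:
    """Check whether x lies inside the rectangle."""
    return all(lo <= v <= hi for (lo, hi), v in zip(rect, x))


def nested_difference_predict(chain: List[Rect], x: Point) -> bool:
    """Membership in R1 \\ (R2 \\ (R3 \\ ...)) for a nested chain
    R1 >= R2 >= ...: positive iff the deepest containing level is odd."""
    depth = 0
    for rect in chain:
        if contains(rect, x):
            depth += 1
        else:
            break  # chain is nested, so no deeper rectangle contains x
    return depth % 2 == 1
```

For example, with an outer rectangle [0,10]^2 and an inner one [3,7]^2, a point like (1,1) is positive (depth 1), while (5,5) falls in the subtracted inner region (depth 2) and is negative.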

Cite

Text

Auer. "Learning Nested Differences in the Presence of Malicious Noise." International Conference on Algorithmic Learning Theory, 1995. doi:10.1007/3-540-60454-5_33

Markdown

[Auer. "Learning Nested Differences in the Presence of Malicious Noise." International Conference on Algorithmic Learning Theory, 1995.](https://mlanthology.org/alt/1995/auer1995alt-learning/) doi:10.1007/3-540-60454-5_33

BibTeX

@inproceedings{auer1995alt-learning,
  title     = {{Learning Nested Differences in the Presence of Malicious Noise}},
  author    = {Auer, Peter},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {1995},
  pages     = {123--137},
  doi       = {10.1007/3-540-60454-5_33},
  url       = {https://mlanthology.org/alt/1995/auer1995alt-learning/}
}