On a Generalized Notion of Mistake Bounds

Abstract

This paper proposes the use of constructive ordinals as mistake bounds in the on-line learning model. This approach elegantly generalizes the applicability of the on-line mistake bound model to learnability analysis of very expressive concept classes like pattern languages, unions of pattern languages, elementary formal systems, and minimal models of logic programs. The main result in the paper shows that the topological property of effective finite bounded thickness is a sufficient condition for on-line learnability with a certain ordinal mistake bound. An interesting characterization of the on-line learning model is shown in terms of the identification in the limit framework. It is established that the classes of languages learnable in the on-line model with a mistake bound of α are exactly the same as the classes of languages learnable in the limit from both positive and negative data by a Popperian, consistent learner with a mind change bound of α. This result nicely builds a bridge between the two models.

Cite

Text

Jain and Sharma. "On a Generalized Notion of Mistake Bounds." Annual Conference on Computational Learning Theory, 1999. doi:10.1145/307400.307450

Markdown

[Jain and Sharma. "On a Generalized Notion of Mistake Bounds." Annual Conference on Computational Learning Theory, 1999.](https://mlanthology.org/colt/1999/jain1999colt-generalized/) doi:10.1145/307400.307450

BibTeX

@inproceedings{jain1999colt-generalized,
  title     = {{On a Generalized Notion of Mistake Bounds}},
  author    = {Jain, Sanjay and Sharma, Arun},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1999},
  pages     = {249--256},
  doi       = {10.1145/307400.307450},
  url       = {https://mlanthology.org/colt/1999/jain1999colt-generalized/}
}