Separating PAC and Mistake-Bound Learning Models over the Boolean Domain (Abstract)

Abstract

Two of the most commonly used theoretical learning models are the distribution-free (PAC, Valiant-style) model, in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound (on-line, unrestricted equivalence query) model, in which examples are presented in order by an adversary. Over the Boolean domain {0,1}^n, it is known that if the learner is allowed unlimited computational resources, then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the PAC model. This paper shows that if one-way functions exist, then the converse does not hold. We present a concept class over {0,1}^n that is learnable in the PAC model, but is not learnable in the absolute mistake-bound model if one-way functions exist. Moreover, the concept class remains hard to learn in the mistake-bound model even if the learner is allowed a polynomial number of membership queries.

The concepts are based on the Goldreich, Goldwasser, and Micali random function construction [1] and involve creating the following new cryptographic object: an exponentially long sequence of strings σ_1, σ_2, …, σ_r over {0,1}^n that is hard to compute in one direction (given σ_i, one cannot compute σ_j for j < i) but easy to compute, and even to make random-access jumps in, the other direction (given σ_i and j > i, one can compute σ_j, even if j is exponentially larger than i). These properties are stronger than those of other such sequences previously considered, which do not allow random-access jumps forward without knowledge of a seed that also allows one to compute backwards.

The sequence is useful for separating the learning models for roughly the following reason. Attached to each σ_i is a classification that can be computed only given some σ_j with j < i. So, an adversary can present the strings in the order σ_r, σ_{r-1}, … and cause the learner to make a mistake nearly half the time. However, for any distribution over the strings, if a learner collects m samples, then in expectation about m/(m+1) of the distribution falls on the "easy side" (the set of σ_j for j > i) of the string σ_i of least index seen. The "random-access forward" property of the sequence then allows the PAC learner to use σ_i as a predictor for any σ_j with j > i. Notice that without the random-access property, even under the uniform distribution the learner might still fail, because the examples seen would likely all be exponentially far apart.

An extended abstract will appear in the Proceedings of the 31st Annual Symposium on the Foundations of Computer Science, St. Louis, Missouri, October 22–24, 1990.
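To make the "hard backward, random-access forward" property concrete, here is a toy sketch in the spirit of the GGM tree construction the abstract mentions. It is not the paper's actual construction, and SHA-256 merely stands in for a length-doubling pseudorandom generator (no security is claimed); all names and parameters are illustrative. The idea: σ_i is leaf i of a binary seed tree, and the "state" held at index i is the frontier of subtree seeds covering leaves i..r. From that frontier one can descend to any σ_j with j ≥ i in O(log r) steps, but no seed of a subtree containing an index below i is available.

```python
import hashlib

DEPTH = 16  # toy parameter: indices range over 0 .. 2**DEPTH - 1

def _child(seed: bytes, bit: int) -> bytes:
    """Stand-in for one half of a length-doubling PRG
    (SHA-256 here, purely for illustration)."""
    return hashlib.sha256(seed + bytes([bit])).digest()

def sigma(root: bytes, i: int) -> bytes:
    """sigma_i = leaf i of the GGM-style seed tree."""
    s = root
    for d in reversed(range(DEPTH)):
        s = _child(s, (i >> d) & 1)
    return s

def frontier(root: bytes, i: int):
    """Seeds of the minimal subtrees covering leaves i .. 2**DEPTH - 1.
    Holding this frontier permits forward jumps to any sigma_j, j >= i,
    but exposes no subtree containing an index below i."""
    nodes = []
    s = root
    for d in reversed(range(DEPTH)):
        bit = (i >> d) & 1
        if bit == 0:  # path turns left: the whole right sibling subtree is > i
            start = ((i >> d) | 1) << d      # leftmost leaf under that sibling
            nodes.append((d, start, _child(s, 1)))
        s = _child(s, bit)
    nodes.append((0, i, s))                  # leaf i itself covers [i, i]
    return nodes

def jump(nodes, j: int) -> bytes:
    """Random-access forward jump: compute sigma_j from the frontier at i,
    for any j >= i, in O(DEPTH) work even if j - i is exponentially large."""
    for d, start, s in nodes:
        if start <= j < start + (1 << d):
            for dd in reversed(range(d)):    # descend inside the covering subtree
                s = _child(s, (j >> dd) & 1)
            return s
    raise ValueError("backward jump: j lies below the frontier")
```

Note that the frontier holds at most DEPTH + 1 seeds, so the state at any index is polynomial in the tree depth even though the sequence itself has exponential length, which is why the PAC learner in the abstract can afford to store σ_i and predict any later σ_j.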

Cite

Text

Avrim Blum. "Separating PAC and Mistake-Bound Learning Models over the Boolean Domain (Abstract)." Annual Conference on Computational Learning Theory, 1990. doi:10.1016/B978-1-55860-146-8.50039-4


BibTeX

@inproceedings{blum1990colt-separating,
  title     = {{Separating PAC and Mistake-Bound Learning Models over the Boolean Domain (Abstract)}},
  author    = {Blum, Avrim},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1990},
  pages     = {393},
  doi       = {10.1016/B978-1-55860-146-8.50039-4},
  url       = {https://mlanthology.org/colt/1990/blum1990colt-separating/}
}