Polynomial Uniform Convergence and Polynomial-Sample Learnability

Abstract

In this work we study the relationship between PAC learning and the property of uniform convergence. We define the concept of polynomial uniform convergence of relative frequencies to probabilities in the distribution-dependent context. Let Xn = {0,1}^n, let Pn be a probability distribution on Xn, and let Fn ⊆ 2^Xn be a class of events. The family {(Xn, Pn, Fn)}n≥1 is said to be polynomially uniformly convergent if, for all n, the probability that the maximum difference (over Fn) between the relative frequency and the probability of an event exceeds a given positive ε is at most δ (0 < δ < 1), when the sample on which the frequency is evaluated has size polynomial in n, 1/ε, and 1/δ. Given a t-sample (x1, …, xt), let Cn(t)(x1, …, xt) be the Vapnik–Chervonenkis dimension (VCdim) of the class {{x1, …, xt} ∩ f | f ∈ Fn}, and let M(n, t) be the expectation E(Cn(t)/t). The results we obtain are:
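The quantity Cn(t) above is the VC dimension of the trace of Fn on a fixed t-sample. As a minimal illustrative sketch (the family and sample below are made-up assumptions, not taken from the paper), it can be computed by brute force: restrict every event to the sample, then look for the largest shattered subset.

```python
from itertools import combinations

def trace(family, sample):
    """Restrict each event (a set of points) in `family` to the sample."""
    return {frozenset(f & set(sample)) for f in family}

def vc_dim_on_sample(family, sample):
    """VC dimension of the trace of `family` on `sample`: the size of the
    largest subset S of the sample such that every subset of S is realised
    as S ∩ f for some event f in the family (i.e. S is shattered)."""
    restricted = trace(family, sample)
    best = 0
    for k in range(1, len(sample) + 1):
        for S in combinations(sample, k):
            S = set(S)
            realised = {frozenset(S & r) for r in restricted}
            if len(realised) == 2 ** len(S):  # all 2^|S| subsets realised
                best = k
                break  # found a shattered set of size k; try k + 1
    return best

# Hypothetical example over X = {0,1}^2, with events encoded as point sets.
family = [set(), {(0, 0)}, {(0, 1)}, {(0, 0), (0, 1)}]
sample = [(0, 0), (0, 1), (1, 1)]
print(vc_dim_on_sample(family, sample))  # → 2
```

Here {(0,0), (0,1)} is shattered (all four of its subsets appear as intersections with the family), while no 3-point subset can be, since the trace contains only four distinct sets.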

Cite

Text

Bertoni et al. "Polynomial Uniform Convergence and Polynomial-Sample Learnability." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130414

Markdown

[Bertoni et al. "Polynomial Uniform Convergence and Polynomial-Sample Learnability." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/bertoni1992colt-polynomial/) doi:10.1145/130385.130414

BibTeX

@inproceedings{bertoni1992colt-polynomial,
  title     = {{Polynomial Uniform Convergence and Polynomial-Sample Learnability}},
  author    = {Bertoni, Alberto and Campadelli, Paola and Morpurgo, Anna and Panizza, Sandra},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1992},
  pages     = {265--271},
  doi       = {10.1145/130385.130414},
  url       = {https://mlanthology.org/colt/1992/bertoni1992colt-polynomial/}
}