Fat-Shattering and the Learnability of Real-Valued Functions

Abstract

We consider the problem of learning real-valued functions from random examples when the function values are corrupted with noise. Under mild conditions on independent observation noise, we provide characterizations of the learnability of a real-valued function class in terms of a generalization of the Vapnik-Chervonenkis dimension: the fat-shattering function, introduced by Kearns and Schapire. We show that, given some restrictions on the noise, a function class is learnable in our model if and only if its fat-shattering function is finite. Under different (also quite mild) restrictions, satisfied for example by Gaussian noise, we show that a function class is learnable from polynomially many examples if and only if its fat-shattering function grows polynomially. We prove analogous results in an agnostic setting, where there is no assumption of an underlying function class.
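
For reference, the fat-shattering function can be stated briefly. The following is a standard formulation in the style of Kearns and Schapire; the notation is chosen here for exposition rather than quoted from the paper. A class $F$ of functions from a domain $X$ into the reals $\gamma$-shatters a set $\{x_1, \dots, x_m\} \subseteq X$ if there exist witnesses $r_1, \dots, r_m \in \mathbb{R}$ such that for every pattern $b \in \{0,1\}^m$ some $f \in F$ satisfies

$$
f(x_i) \ge r_i + \gamma \ \text{ when } b_i = 1, \qquad f(x_i) \le r_i - \gamma \ \text{ when } b_i = 0.
$$

The fat-shattering function maps each scale $\gamma > 0$ to the size of the largest $\gamma$-shattered set:

$$
\mathrm{fat}_F(\gamma) = \max\{\, m : F \text{ $\gamma$-shatters some set of $m$ points in } X \,\}.
$$

For a $\{0,1\}$-valued class and any $0 < \gamma < 1/2$, this reduces to the Vapnik-Chervonenkis dimension, which is the sense in which it generalizes the VC dimension.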

Cite

Text

Bartlett et al. "Fat-Shattering and the Learnability of Real-Valued Functions." Annual Conference on Computational Learning Theory, 1994. doi:10.1145/180139.181158

Markdown

[Bartlett et al. "Fat-Shattering and the Learnability of Real-Valued Functions." Annual Conference on Computational Learning Theory, 1994.](https://mlanthology.org/colt/1994/bartlett1994colt-fat/) doi:10.1145/180139.181158

BibTeX

@inproceedings{bartlett1994colt-fat,
  title     = {{Fat-Shattering and the Learnability of Real-Valued Functions}},
  author    = {Bartlett, Peter L. and Long, Philip M. and Williamson, Robert C.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1994},
  pages     = {299--310},
  doi       = {10.1145/180139.181158},
  url       = {https://mlanthology.org/colt/1994/bartlett1994colt-fat/}
}