Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples

Abstract

Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory, and we have previously shown (Poggio and Girosi, 1990a, 1990b) the equivalence between regularization and a class of three-layer networks that we call regularization networks. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples or outliers, and learning from positive and negative examples.
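In a regularization network of this kind, the learned map is a weighted sum of basis functions centered on the training examples, with coefficients obtained by solving a regularized linear system. The sketch below is a minimal illustration of that idea, not the paper's implementation: the Gaussian basis, the width `sigma`, and the regularization parameter `lam` are all assumed choices for demonstration.

```python
import numpy as np

# Illustrative regularization (RBF) network: f(x) = sum_i c_i * G(||x - t_i||),
# with Gaussian basis G and one center t_i per training example.
# sigma and lam are arbitrary demo values, not values from the paper.

def gaussian_gram(X, T, sigma):
    """Matrix of G(||x - t||) for all inputs X against centers T."""
    d2 = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit(X, y, sigma=0.5, lam=1e-3):
    """Solve the regularized linear system (G + lam*I) c = y for c."""
    G = gaussian_gram(X, X, sigma)
    return np.linalg.solve(G + lam * np.eye(len(X)), y)

def predict(X_new, X_train, c, sigma=0.5):
    return gaussian_gram(X_new, X_train, sigma) @ c

# Example: recover a smooth 1-D function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)
c = fit(X, y)
print(predict(np.array([[0.0]]), X, c))  # close to sin(0) = 0
```

The regularization term `lam` controls the smoothness of the fit; larger values make the network less sensitive to individual (possibly unreliable) examples, which is the dial the paper's outlier extension generalizes.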

Cite

Text

Girosi et al. "Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples." Neural Information Processing Systems, 1990.

Markdown

[Girosi et al. "Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples." Neural Information Processing Systems, 1990.](https://mlanthology.org/neurips/1990/girosi1990neurips-extensions/)

BibTeX

@inproceedings{girosi1990neurips-extensions,
  title     = {{Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples}},
  author    = {Girosi, Federico and Poggio, Tomaso and Caprile, Bruno},
  booktitle = {Neural Information Processing Systems},
  year      = {1990},
  pages     = {750-756},
  url       = {https://mlanthology.org/neurips/1990/girosi1990neurips-extensions/}
}