Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network

Abstract

In many machine learning applications, one has access, not only to training data, but also to some high-level a priori knowledge about the desired behavior of the system. For example, it is known in advance that the output of a character recognizer should be invariant with respect to small spatial distortions of the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that allows a network to learn the derivative of its outputs with respect to distortion operators of our choosing. This not only reduces the learning time and the amount of training data, but also provides a powerful language for specifying what generalizations we wish the network to perform.
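The core idea can be sketched in a few lines: approximate the tangent vector of a chosen distortion (here, horizontal translation) by finite differences, then penalize the directional derivative of the network's outputs along that tangent so the outputs become locally invariant to the distortion. This is only a minimal illustration with a hypothetical toy network and an integer pixel shift standing in for a smooth translation operator; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 "image" and a tiny one-layer network (a hypothetical stand-in
# for the character recognizer discussed in the abstract).
x = rng.normal(size=(8, 8))
W = rng.normal(scale=0.1, size=(10, 64))

def f(img):
    """Network output for a flattened 8x8 image."""
    return np.tanh(W @ img.ravel())

def translate(img, dx):
    """Shift the image horizontally by dx pixels (crude distortion operator)."""
    return np.roll(img, dx, axis=1)

# Tangent vector: finite-difference approximation of d(image)/d(translation).
t = translate(x, 1) - x

# Directional derivative of the network output along the tangent direction,
# again by finite differences (eps scales the distortion).
eps = 1e-3
jvp = (f(x + eps * t) - f(x)) / eps

# Tangent-prop penalty: added to the usual training loss, it drives this
# directional derivative toward zero, encouraging local invariance.
penalty = np.sum(jvp ** 2)
```

In training, this penalty would be weighted and added to the standard supervised loss, and its gradient backpropagated along with the usual error signal.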

Cite

Text

Simard et al. "Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network." Neural Information Processing Systems, 1991.

Markdown

[Simard et al. "Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network." Neural Information Processing Systems, 1991.](https://mlanthology.org/neurips/1991/simard1991neurips-tangent/)

BibTeX

@inproceedings{simard1991neurips-tangent,
  title     = {{Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network}},
  author    = {Simard, Patrice and Victorri, Bernard and LeCun, Yann and Denker, John},
  booktitle = {Neural Information Processing Systems},
  year      = {1991},
  pages     = {895-903},
  url       = {https://mlanthology.org/neurips/1991/simard1991neurips-tangent/}
}