Transformation Invariant Autoassociation with Application to Handwritten Character Recognition

Abstract

When training neural networks with the classical backpropagation algorithm, the whole learning problem must be expressed as a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons, which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper-case handwritten letters and compared to other approaches to the invariance problem.
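The abstract's core idea, one autoassociative network per class with classification by reconstruction quality, can be illustrated with a small sketch. The code below is a hedged illustration, not the authors' architecture: it trains one tiny one-hidden-layer autoencoder per class on toy data and labels a test pattern with the class whose autoencoder reconstructs it best. The paper's actual contribution, making the comparison invariant to transformations such as rotation and translation, is not implemented here; the class `Autoassociator`, the layer sizes, the learning rate, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch of classification by per-class autoassociators.
# Not the paper's exact model; hyperparameters and data are toy assumptions.
import numpy as np

class Autoassociator:
    """One-hidden-layer autoencoder trained to reproduce its input."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)      # hidden code
        return h, h @ self.W2 + self.b2         # linear reconstruction

    def train(self, X, lr=0.01, epochs=500):
        """Plain backpropagation on the squared reconstruction error."""
        for _ in range(epochs):
            h, y = self.forward(X)
            err = y - X
            # Gradients of the mean squared reconstruction error.
            gW2 = h.T @ err / len(X)
            gb2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * (1.0 - h ** 2)
            gW1 = X.T @ dh / len(X)
            gb1 = dh.mean(axis=0)
            self.W1 -= lr * gW1
            self.b1 -= lr * gb1
            self.W2 -= lr * gW2
            self.b2 -= lr * gb2

    def reconstruction_error(self, x):
        _, y = self.forward(x[None, :])
        return float(np.sum((y[0] - x) ** 2))

def classify(x, models):
    """Label x with the class whose autoassociator reconstructs it best."""
    errors = {label: m.reconstruction_error(x) for label, m in models.items()}
    return min(errors, key=errors.get)

# Toy usage: random 16-dimensional vectors stand in for character images.
rng = np.random.default_rng(1)
classes = (0, 1, 2)
data = {c: rng.normal(loc=c, scale=0.3, size=(50, 16)) for c in classes}
models = {c: Autoassociator(n_in=16, n_hidden=4) for c in classes}
for c in classes:
    models[c].train(data[c])
print(classify(data[1][0], models))   # expected output: 1
```

In the paper's setting, the plain squared reconstruction error used above would presumably be replaced or augmented by a transformation-tolerant comparison, so that, for example, a slightly rotated letter still scores well under its own class's autoassociator; the sketch only shows the modular per-class structure.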

Cite

Text

Schwenk and Milgram. "Transformation Invariant Autoassociation with Application to Handwritten Character Recognition." Neural Information Processing Systems, 1994.

Markdown

[Schwenk and Milgram. "Transformation Invariant Autoassociation with Application to Handwritten Character Recognition." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/schwenk1994neurips-transformation/)

BibTeX

@inproceedings{schwenk1994neurips-transformation,
  title     = {{Transformation Invariant Autoassociation with Application to Handwritten Character Recognition}},
  author    = {Schwenk, Holger and Milgram, Maurice},
  booktitle = {Neural Information Processing Systems},
  year      = {1994},
  pages     = {992--998},
  url       = {https://mlanthology.org/neurips/1994/schwenk1994neurips-transformation/}
}