A Second-Order Translation, Rotation and Scale Invariant Neural Network
Abstract
A second-order architecture is presented here for translation, rotation and scale invariant processing of 2-D images mapped to n input units. This new architecture has a complexity of O(n) weights as opposed to the O(n³) weights usually required for a third-order, rotation invariant architecture. The reduction in complexity is due to the use of discrete frequency information. Simulations show favorable comparisons to other neural network architectures.
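The weight counts in the abstract follow from how higher-order units are wired: a second-order unit takes every pairwise product of its inputs, so an unconstrained unit over n pixels already needs O(n²) weights, and a third-order unit needs O(n³). Invariant architectures cut these counts by tying weights across input combinations that map onto one another under translation, rotation, or scaling. The sketch below is not the authors' architecture; it is a minimal illustration of a single second-order (sigma-pi) unit, with hypothetical inputs and weights, to make the pairwise-product structure concrete.

```python
import numpy as np

def second_order_unit(x, w, bias=0.0):
    """One second-order unit: y = sigmoid(bias + sum_ij w[i,j] * x[i] * x[j]).

    An unconstrained weight matrix w has n*n entries (O(n^2) weights);
    invariance constraints reduce the number of free parameters by
    forcing weights on equivalent input pairs to share a single value.
    """
    pair_products = np.outer(x, x)           # all pairwise products x_i * x_j
    z = bias + np.sum(w * pair_products)     # weighted sum over all pairs
    return 1.0 / (1.0 + np.exp(-z))          # logistic activation

# Hypothetical example: a flattened 2x2 image patch and random weights.
rng = np.random.default_rng(0)
x = np.array([0.2, 0.9, 0.9, 0.2])
w = rng.normal(size=(4, 4))
y = second_order_unit(x, w)
```

Note that because `pair_products` is symmetric, tying `w[i, j]` to `w[j, i]` already halves the free parameters; invariance constraints of the kind the paper exploits tie far larger groups of weights together.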
Cite
Text
Goggin et al. "A Second-Order Translation, Rotation and Scale Invariant Neural Network." Neural Information Processing Systems, 1990.

Markdown

[Goggin et al. "A Second-Order Translation, Rotation and Scale Invariant Neural Network." Neural Information Processing Systems, 1990.](https://mlanthology.org/neurips/1990/goggin1990neurips-secondorder/)

BibTeX
@inproceedings{goggin1990neurips-secondorder,
title = {{A Second-Order Translation, Rotation and Scale Invariant Neural Network}},
author = {Goggin, Shelly D. D. and Johnson, Kristina M. and Gustafson, Karl E.},
booktitle = {Neural Information Processing Systems},
year = {1990},
  pages = {313--319},
url = {https://mlanthology.org/neurips/1990/goggin1990neurips-secondorder/}
}