The Recurrent Cascade-Correlation Architecture
Abstract
Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere [Fahlman, 1990]. RCC can learn from examples to map a sequence of inputs into a desired sequence of outputs. New hidden units with recurrent connections are added to the network as needed during training. In effect, the network builds up a finite-state machine tailored specifically for the current problem. RCC retains the advantages of Cascade-Correlation: fast learning, good generalization, automatic construction of a near-minimal multi-layered network, and incremental training.
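The key idea in the abstract, a hidden unit with a self-recurrent connection that carries state across time steps, can be sketched as follows. This is a minimal illustration of one such unit, not the paper's full training procedure (candidate correlation training and weight freezing are omitted); the function names and weight values are hypothetical.

```python
import math

def rcc_unit_step(inputs, weights, self_weight, prev_output):
    # Net input: weighted sum of the current inputs plus the unit's own
    # previous output scaled by its self-recurrent weight. The recurrent
    # term is what lets the unit retain state, giving the network its
    # finite-state-machine character.
    net = sum(w * x for w, x in zip(weights, inputs)) + self_weight * prev_output
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid activation

def run_sequence(seq, weights, self_weight):
    # Feed an input sequence through one unit, carrying its output forward.
    y = 0.0
    outputs = []
    for x in seq:
        y = rcc_unit_step(x, weights, self_weight, y)
        outputs.append(y)
    return outputs
```

With a positive self-weight, repeated identical inputs push the unit's activation progressively higher, showing how its state depends on input history rather than on the current input alone.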
Cite
Text
Fahlman. "The Recurrent Cascade-Correlation Architecture." Neural Information Processing Systems, 1990.
Markdown
[Fahlman. "The Recurrent Cascade-Correlation Architecture." Neural Information Processing Systems, 1990.](https://mlanthology.org/neurips/1990/fahlman1990neurips-recurrent/)
BibTeX
@inproceedings{fahlman1990neurips-recurrent,
title = {{The Recurrent Cascade-Correlation Architecture}},
author = {Fahlman, Scott E.},
booktitle = {Neural Information Processing Systems},
year = {1990},
pages = {190-196},
url = {https://mlanthology.org/neurips/1990/fahlman1990neurips-recurrent/}
}