Bounds on the Complexity of Recurrent Neural Network Implementations of Finite State Machines
Abstract
In this paper the efficiency of recurrent neural network implementations of m-state finite state machines will be explored. Specifically, it will be shown that the node complexity for the unrestricted case can be bounded above by O(√m). It will also be shown that the node complexity is O(√(m log m)) when the weights and thresholds are restricted to the set {-1, 1}, and O(m) when the fan-in is restricted to two. Matching lower bounds will be provided for each of these upper bounds assuming that the state of the FSM can be encoded in a subset of the nodes of size ⌈log m⌉.
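For intuition, here is a minimal sketch (in Python, not from the paper) of the naive way a recurrent network of hard-threshold units can implement an FSM: keep the state one-hot and realize each transition with an AND unit feeding an OR unit. The automaton, function names, and thresholds below are illustrative assumptions; this construction uses on the order of m·k units for k input symbols, far above the O(√m) node complexity the paper establishes for the unrestricted case.

```python
def theta(z):
    """Hard-threshold (Heaviside) activation used by every unit."""
    return 1 if z >= 0 else 0

def step(s, x, delta, m, k):
    """One synchronous update of a recurrent threshold network
    simulating a DFA with m states and k input symbols.

    s     : one-hot current-state vector, length m
    x     : one-hot input-symbol vector, length k
    delta : dict mapping (state, symbol) -> next state
    """
    # Hidden layer: one AND unit per (state, symbol) pair;
    # h[(q, a)] fires iff s[q] = 1 and x[a] = 1.
    h = {(q, a): theta(s[q] + x[a] - 1.5) for q in range(m) for a in range(k)}

    # Recurrent layer: one OR unit per state; unit q' fires iff
    # some active pair (q, a) satisfies delta(q, a) = q'.
    return [theta(sum(h[qa] for qa, nxt in delta.items() if nxt == qp) - 0.5)
            for qp in range(m)]

# Example: 2-state parity automaton over {0, 1}; the state flips on a 1.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
s = [1, 0]                                  # start in state 0
for sym in [1, 1, 0, 1]:
    x = [1 - sym, sym]                      # one-hot encoding of the symbol
    s = step(s, x, delta, m=2, k=2)
print(s.index(1))                           # prints 1: odd number of 1s seen
```

Reading the string 1101 leaves the network in state 1, matching the parity of the number of 1s; the paper's contribution is to show that far more compact state encodings than this one-hot scheme are achievable.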
Cite
Text
Horne and Hush. "Bounds on the Complexity of Recurrent Neural Network Implementations of Finite State Machines." Neural Information Processing Systems, 1993.Markdown
[Horne and Hush. "Bounds on the Complexity of Recurrent Neural Network Implementations of Finite State Machines." Neural Information Processing Systems, 1993.](https://mlanthology.org/neurips/1993/horne1993neurips-bounds/)BibTeX
@inproceedings{horne1993neurips-bounds,
title = {{Bounds on the Complexity of Recurrent Neural Network Implementations of Finite State Machines}},
author = {Horne, Bill G. and Hush, Don R.},
booktitle = {Neural Information Processing Systems},
year = {1993},
pages = {359--366},
url = {https://mlanthology.org/neurips/1993/horne1993neurips-bounds/}
}