Representation of Finite State Automata in Recurrent Radial Basis Function Networks

Abstract

In this paper, we propose some techniques for injecting finite state automata into Recurrent Radial Basis Function networks (R^2BF). We show that, when provided with proper hints and when the weight space is suitably constrained, these networks behave as automata. A technique is suggested for forcing the learning process to develop automaton representations, based on adding a proper penalty term to the ordinary cost function. Successful experimental results are shown for inductive inference of regular grammars.
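The two ingredients of the abstract can be sketched in a few lines: a recurrent RBF state update, and a penalty that pushes state activations toward the saturated values 0/1 so that the learned representation behaves like a discrete automaton state. The following is a minimal illustrative sketch, not the paper's exact equations; the Gaussian RBF layer over the concatenated input and previous state, the sigmoid state readout, and the `s(1-s)` penalty form are all assumptions made for illustration.

```python
import numpy as np

def r2bf_step(x_t, s_prev, centers, widths, W):
    """One step of a recurrent RBF network (illustrative sketch).

    The RBF layer sees the current input concatenated with the previous
    state; the next state is a sigmoid combination of RBF activations
    (assumed form, not the authors' exact formulation).
    """
    z = np.concatenate([x_t, s_prev])
    # Gaussian radial basis activations around the stored centers
    h = np.exp(-np.sum((z - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    # Squash to (0, 1) so the state can approximate binary automaton states
    return 1.0 / (1.0 + np.exp(-W @ h))

def automaton_penalty(s):
    """Penalty term added to the ordinary cost: zero when every state
    unit is saturated at 0 or 1, largest when units sit at 0.5, thereby
    encouraging automaton-like (near-discrete) state representations."""
    return np.sum(s * (1.0 - s))
```

During training, the total cost would be the ordinary error plus a weighted `automaton_penalty` evaluated on the network state at each time step, so gradient descent is biased toward saturated, automaton-like states.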

Cite

Text

Frasconi et al. "Representation of Finite State Automata in Recurrent Radial Basis Function Networks." Machine Learning, 1996. doi:10.1007/BF00116897

Markdown

[Frasconi et al. "Representation of Finite State Automata in Recurrent Radial Basis Function Networks." Machine Learning, 1996.](https://mlanthology.org/mlj/1996/frasconi1996mlj-representation/) doi:10.1007/BF00116897

BibTeX

@article{frasconi1996mlj-representation,
  title     = {{Representation of Finite State Automata in Recurrent Radial Basis Function Networks}},
  author    = {Frasconi, Paolo and Gori, Marco and Maggini, Marco and Soda, Giovanni},
  journal   = {Machine Learning},
  year      = {1996},
  pages     = {5--32},
  doi       = {10.1007/BF00116897},
  volume    = {23},
  url       = {https://mlanthology.org/mlj/1996/frasconi1996mlj-representation/}
}