An Algebraic Framework to Represent Finite State Machines in Single-Layer Recurrent Neural Networks
Open Access
- 1 September 1995
- Journal article
- Published by MIT Press in Neural Computation
- Vol. 7 (5), 931-949
- https://doi.org/10.1162/neco.1995.7.5.931
Abstract
In this paper we present an algebraic framework for representing finite state machines (FSMs) in single-layer recurrent neural networks (SLRNNs), which unifies and generalizes some previous proposals. The framework is based on formulating both the state transition function and the output function of an FSM as a linear system of equations, and it permits an analytical explanation of the representational capabilities of first-order and higher-order SLRNNs. The framework can be used to insert symbolic knowledge into RNNs prior to learning from examples and to preserve this knowledge while training the network. The approach is valid for a wide range of activation functions, provided certain stability conditions are met. The framework has already been used in practice in a hybrid method for grammatical inference reported elsewhere (Sanfeliu and Alquézar 1994).
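As a rough illustration of the core idea, the sketch below encodes a two-state parity automaton in a second-order SLRNN: one-hot state and symbol codes are combined by an outer product, the state transition function is posed as a linear system on the pre-activations, and the weights are obtained by a least-squares solve. The particular FSM, the saturation gain `H`, and the use of `numpy.linalg.lstsq` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Hedged sketch: deriving the weights of a toy second-order SLRNN from an
# FSM's transition table by solving a linear system (illustrative only).

states = ["even", "odd"]          # parity automaton over the alphabet {0, 1}
symbols = [0, 1]
delta = {("even", 0): "even", ("even", 1): "odd",
         ("odd", 0): "odd",   ("odd", 1): "even"}

n, m = len(states), len(symbols)

def one_hot(i, size):
    v = np.zeros(size)
    v[i] = 1.0
    return v

def second_order(state_code, symbol_code):
    # Outer product of the two codes, flattened: one component per (q, a) pair.
    return np.outer(state_code, symbol_code).ravel()

H = 8.0  # gain pushing pre-activations into the sigmoid's saturated range
X, Y = [], []
for (q, a), q_next in delta.items():
    X.append(second_order(one_hot(states.index(q), n),
                          one_hot(symbols.index(a), m)))
    # Pre-activation target: +H where the next-state code is 1, -H where it is 0.
    Y.append(H * (2.0 * one_hot(states.index(q_next), n) - 1.0))
X, Y = np.array(X), np.array(Y)

# Solve the linear system X @ W = Y; here X is a permutation matrix, so the
# least-squares solution is exact.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def step(state_code, symbol):
    x = second_order(state_code, one_hot(symbols.index(symbol), m))
    return 1.0 / (1.0 + np.exp(-x @ W))  # logistic activation

code = one_hot(0, n)  # start in state "even"
for sym in [1, 1, 0, 1]:
    code = step(code, sym)
print(states[int(np.argmax(code))])  # -> "odd" (three 1s in the input)
```

The second-order (multiplicative) combination makes the linear system trivially solvable for any FSM, which mirrors the paper's point about the differing representational capabilities of first-order and higher-order SLRNNs: a first-order network, which only sums state and symbol contributions, cannot realize the parity transitions (an XOR), whereas the product terms decouple every (state, symbol) pair.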