Stable Encoding of Finite-State Machines in Discrete-Time Recurrent Neural Nets with Sigmoid Units
- 1 September 2000
- Research article
- Published by MIT Press in Neural Computation
- Vol. 12 (9), pp. 2129–2174
- https://doi.org/10.1162/089976600300015097
Abstract
There has been considerable interest in the use of discrete-time recurrent neural nets (DTRNN) to learn finite-state tasks, with interesting results regarding the induction of simple finite-state machines from input–output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This article describes a simple strategy to devise stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and gives a detailed presentation of how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work that either imposed restrictions on state values or relied on a detailed analysis of fixed-point attractors, our approach applies to any positive, bounded, strictly growing, continuous activation function, and uses simple bounding criteria derived from a study of the conditions under which the proposed encoding scheme guarantees that the DTRNN actually behaves as a finite-state machine.
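To illustrate the general flavor of such encodings, the sketch below simulates a deterministic finite automaton with a second-order sigmoid network, in the style of Omlin-and-Giles-type constructions cited by the paper. It is a minimal, hypothetical example, not the paper's exact scheme: the function names, the one-hot state/input coding, and the weight magnitude H are all illustrative assumptions. The idea is that with large enough H the sigmoid saturates, so the state vector stays close to a one-hot code of the automaton's current state at every step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode_dfa(delta, n_states, n_symbols, H=8.0):
    """Build second-order weights W[i, j, k] so that, with one-hot state x
    and one-hot input u, unit i = delta(j, k) is driven high and all other
    units are driven low (hypothetical 'split' weight assignment)."""
    W = -H * np.ones((n_states, n_states, n_symbols))
    for (j, k), i in delta.items():
        W[i, j, k] = +H                   # excite the successor state
    b = -(H / 2.0) * np.ones(n_states)    # bias keeps non-successor units low
    return W, b

def run(W, b, start, string, n_symbols):
    x = np.zeros(W.shape[0])
    x[start] = 1.0                        # (nearly) one-hot state vector
    for k in string:
        u = np.eye(n_symbols)[k]
        # x_i(t+1) = g( sum_{j,k} W[i,j,k] * x_j(t) * u_k(t) + b_i )
        x = sigmoid(np.einsum('ijk,j,k->i', W, x, u) + b)
    return x

# Parity automaton: state 0 = even number of 1s seen so far, state 1 = odd.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W, b = encode_dfa(delta, n_states=2, n_symbols=2)
print(run(W, b, start=0, string=[1, 0, 1, 1], n_symbols=2))
# -> unit 1 (odd parity) ends near 1, unit 0 near 0: the trajectory stays
#    close to the one-hot codes of the automaton's states.
```

With H = 8, the active unit settles near sigmoid(H/2) ≈ 0.98 and the inactive units near 0, and these bounds persist over arbitrarily long strings; making this kind of stability argument rigorous for a general class of activation functions is the subject of the paper.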