First-order versus second-order single-layer recurrent neural networks
- 1 May 1994
- journal article
- letter
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 5 (3), pp. 511-513
- https://doi.org/10.1109/72.286928
Abstract
We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, provided state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs.
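As an illustration of the two architectures compared in the letter, here is a minimal NumPy sketch (not taken from the paper itself) of the two update rules with hard-limiting neurons. In the first-order rule, the next state is a thresholded linear function of the current state and input taken separately; in the second-order rule, weights multiply products of a state unit and an input unit, so with a one-hot input each symbol effectively selects its own state-transition matrix. The parity-automaton encoding and all function names are my own assumptions for demonstration.

```python
import numpy as np

def hard_limit(z):
    # Hard-limiting (threshold) activation: 1 if z > 0, else 0.
    return (z > 0).astype(float)

def first_order_step(s, x, W, V, b):
    # First-order SLRNN update: state and input enter through
    # separate weight matrices, s' = f(W s + V x + b).
    return hard_limit(W @ s + V @ x + b)

def second_order_step(s, x, T, b):
    # Second-order SLRNN update: s'_i = f(sum_{j,k} T[i,j,k] s_j x_k + b_i).
    # With one-hot input x, T[:, :, k] acts as the transition matrix
    # for input symbol k.
    return hard_limit(np.einsum('ijk,j,k->i', T, s, x) + b)

# Example: a 2-state parity recognizer as a second-order SLRNN.
# State is one-hot: [1, 0] = "even number of 1s", [0, 1] = "odd".
T = np.zeros((2, 2, 2))
T[:, :, 0] = np.eye(2)                  # symbol 0: stay in place
T[:, :, 1] = np.array([[0., 1.],
                       [1., 0.]])       # symbol 1: swap states
b = np.full(2, -0.5)                    # threshold so f(1) = 1, f(0) = 0

s = np.array([1., 0.])                  # start in the "even" state
for sym in [1, 1, 0, 1]:                # three 1s in total -> "odd"
    s = second_order_step(s, np.eye(2)[sym], T, b)
```

After the loop, `s` is `[0., 1.]`: the network has counted an odd number of 1s, mirroring how each input symbol of a second-order SLRNN directly gates a state transition, something a single first-order layer with hard-limiting neurons cannot do for every finite-state recognizer.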
References
- A structure by which a recurrent neural network can approximate a nonlinear dynamic system. IEEE, 2002.
- On the computational power of neural nets. ACM, 1992.
- Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks. Neural Computation, 1992.
- The induction of dynamical recognizers. Machine Learning, 1991.
- Efficient simulation of finite automata by neural nets. Journal of the ACM, 1991.
- Finding structure in time. Cognitive Science, 1990.
- Digital systems for artificial neural networks. IEEE Circuits and Devices Magazine, 1989.