Learning Finite State Machines With Self-Clustering Recurrent Networks
- 1 November 1993
- journal article
- Published by MIT Press in Neural Computation
- Vol. 5 (6), 976-990
- https://doi.org/10.1162/neco.1993.5.6.976
Abstract
Recent work has shown that recurrent neural networks have the ability to learn finite state automata from examples. In particular, networks using second-order units have been successful at this task. In studying the performance and learning behavior of such networks we have found that the second-order network model attempts to form clusters in activation space as its internal representation of states. However, these learned states become unstable as longer and longer test input strings are presented to the network. In essence, the network “forgets” where the individual states are in activation space. In this paper we propose a new method to force such a network to learn stable states by introducing discretization into the network and using a pseudo-gradient learning rule to perform training. The essence of the learning rule is that in doing gradient descent, it makes use of the gradient of a sigmoid function as a heuristic hint in place of that of the hard-limiting function, while still using the discretized value in the feedback update path. The new structure uses isolated points in activation space instead of vague clusters as its internal representation of states. It is shown to have capabilities in learning finite state automata similar to those of the original network, but without the instability problem. The proposed pseudo-gradient learning rule may also be used as a basis for training other types of networks that have hard-limiting threshold activation functions.
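The pseudo-gradient rule described in the abstract can be sketched in code. The following is a minimal illustration under stated assumptions, not the authors' implementation: the forward pass discretizes the second-order unit activations with a hard threshold, and that discretized value is what is fed back as the next state, while the weight update substitutes the sigmoid's derivative for the threshold's derivative (which is zero almost everywhere). The function name `pseudo_grad_step`, the per-step squared-error signal, the learning rate, and the toy dimensions are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hard_limit(z):
    # Forward/feedback discretization: activations snap to 0 or 1.
    return (z > 0.0).astype(float)

def pseudo_grad_step(W, s, x, target, lr=0.1):
    """One pseudo-gradient descent step for a layer of second-order units.

    Forward pass: z[j] = sum_{i,k} W[j,i,k] * s[i] * x[k]; the new state is
    hard_limit(z), and this discretized value is what gets fed back.
    Update step: the derivative of hard_limit (zero almost everywhere) is
    replaced by the sigmoid's derivative at z, used as a heuristic hint.
    """
    z = np.einsum('jik,i,k->j', W, s, x)      # second-order (product) units
    s_next = hard_limit(z)                    # discretized state
    err = s_next - target                     # simplified error signal
    pseudo = sigmoid(z) * (1.0 - sigmoid(z))  # sigmoid derivative as hint
    grad = np.einsum('j,i,k->jik', err * pseudo, s, x)
    return W - lr * grad, s_next

# Toy usage: 3 state units, 2 one-hot input symbols.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3, 2))
s = np.array([1.0, 0.0, 0.0])        # start state
x = np.array([1.0, 0.0])             # input symbol "0"
target = np.array([0.0, 1.0, 0.0])   # desired next state (illustrative)
W, s = pseudo_grad_step(W, s, x, target)
```

Because the hard-limited value, rather than the continuous sigmoid output, is fed back on the next time step, the learned states are isolated points in activation space instead of clusters, which is the stability property the paper claims.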