Rule revision with recurrent neural networks
- 1 January 1996
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Knowledge and Data Engineering
- Vol. 8 (1), 183-188
- https://doi.org/10.1109/69.485647
Abstract
Recurrent neural networks readily process, recognize, and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules of the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that the networks not only preserve correct rules but also correct, through training, inserted rules that were initially incorrect. (By incorrect, we mean rules that were not part of the randomly generated grammar.)
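The rule-insertion step described in the abstract can be illustrated with a small sketch of a second-order recurrent network: known DFA transitions are programmed into the second-order weights as large positive or negative values, and the network is then run on input strings. The toy parity grammar, the hint strength H, and the unary state encoding below are illustrative assumptions, not settings taken from the paper.

```python
# Minimal sketch of rule insertion into a second-order recurrent network,
# in the spirit of the approach described in the abstract. The toy DFA,
# the hint strength H, and the unary state encoding are assumptions made
# for illustration only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy DFA over {0, 1}: accepts strings with an even number of 1s.
# delta[state][symbol] -> next state; state 0 is the start state and
# the only accepting state.
delta = {0: {0: 0, 1: 1},
         1: {0: 1, 1: 0}}

n_states, n_symbols = 2, 2
H = 4.0                      # hint strength for programmed weights
rng = np.random.default_rng(0)

# Second-order weights W[i, j, k]: contribution to state neuron i from
# state neuron j while reading input symbol k. Small random values stand
# in for the "unknown" part of the network before training.
W = rng.normal(scale=0.1, size=(n_states, n_states, n_symbols))
b = rng.normal(scale=0.1, size=n_states)

# Rule insertion: for each known transition delta(q_j, a_k) = q_i,
# drive W[i, j, k] toward +H and the competing entries toward -H.
for j, row in delta.items():
    for k, i in row.items():
        for target in range(n_states):
            W[target, j, k] = H if target == i else -H

def run(string):
    """Propagate a binary string through the network and return the
    activation of the accepting-state neuron after the last symbol."""
    s = np.zeros(n_states)
    s[0] = 1.0                           # unary encoding of the start state
    for ch in string:
        k = int(ch)
        s = sigmoid(b + W[:, :, k] @ s)  # second-order update, one-hot input
    return s[0]                          # state 0 is the accepting state

for w in ["", "1", "11", "101", "1001"]:
    verdict = "accept" if w.count("1") % 2 == 0 else "reject"
    print(f"{w!r:8s} accept-score = {run(w):.3f}  (DFA says {verdict})")
```

Running the sketch prints accept-scores near 1 for strings with an even number of 1s and near 0 otherwise, i.e. the rule-initialized network already behaves like the inserted DFA. In the setting the paper studies, such a network would then be trained further on example strings, and the revised rules recovered by automata extraction and compared with the rules that were inserted.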