Learning state space trajectories in recurrent neural networks
- 1 January 1989
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 365-372, vol. 2
- https://doi.org/10.1109/ijcnn.1989.118724
Abstract
A number of procedures are described for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E, so these procedures form the kernels of connectionist learning algorithms. Simulations in which networks are taught to move through limit cycles are shown, along with some empirical perturbation sensitivity tests. The author describes a number of elaborations of the basic idea, including mutable time delays and teacher forcing, and includes a complexity analysis of the various learning procedures discussed. Temporally continuous recurrent networks seem particularly well suited to temporally continuous domains, such as signal processing, control, and speech.
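The core idea in the abstract can be illustrated with a minimal sketch: Euler-integrate a continuous-time recurrent network, define E as the integrated squared deviation of the state trajectory from a target trajectory, and descend ∂E/∂w_ij. The gradient here is computed by central finite differences, in the spirit of the paper's perturbation sensitivity checks rather than its efficient procedures; the dynamics dy/dt = -y + σ(Wy), the oscillatory target, and all constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(W, y0, steps, dt=0.1):
    """Euler-integrate the CTRNN dy/dt = -y + sigmoid(W @ y); return the trajectory."""
    ys = [y0]
    y = y0
    for _ in range(steps):
        y = y + dt * (-y + sigmoid(W @ y))
        ys.append(y)
    return np.array(ys)

def error(W, y0, target, dt=0.1):
    """E: integrated squared deviation of the state trajectory from the target."""
    ys = run(W, y0, len(target) - 1, dt)
    return 0.5 * dt * np.sum((ys - target) ** 2)

def grad_fd(W, y0, target, eps=1e-5, dt=0.1):
    """dE/dw_ij by central finite differences -- a perturbation estimate,
    not the paper's efficient gradient procedures."""
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (error(Wp, y0, target, dt)
                       - error(Wm, y0, target, dt)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
n, steps, dt = 3, 40, 0.1
W = rng.normal(scale=0.5, size=(n, n))
y0 = rng.normal(scale=0.1, size=n)

# A smooth oscillatory target trajectory, standing in for a taught limit cycle.
t = np.arange(steps + 1)[:, None] * dt
target = 0.5 + 0.3 * np.sin(t + np.arange(n) * 2 * np.pi / n)

e0 = error(W, y0, target, dt)
for _ in range(200):
    W -= 0.3 * grad_fd(W, y0, target, dt=dt)  # gradient descent in the weights
e1 = error(W, y0, target, dt)
```

With the settings above, repeated descent steps drive E down, i.e. the state trajectory is pulled toward the target; the finite-difference gradient costs two trajectory integrations per weight, which is exactly the inefficiency the paper's procedures are designed to avoid.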