Learning State Space Trajectories in Recurrent Neural Networks
- 1 June 1989
- journal article
- research article
- Published by MIT Press in Neural Computation
- Vol. 1 (2), pp. 263-269
- https://doi.org/10.1162/neco.1989.1.2.263
Abstract
Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and the w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
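The setting described in the abstract lends itself to a compact illustration. The following is a minimal sketch, not the paper's algorithm: a small continuous-time recurrent network, integrated with forward Euler, whose two output units are trained to trace a circular limit cycle by gradient descent on a trajectory error E. The network size, time constant, step size, and learning rate are illustrative assumptions, and JAX automatic differentiation stands in for the paper's derivation of ∂E/∂w_ij.

```python
# Hedged sketch: continuous-time RNN trained on a trajectory error.
# Assumptions: 10 units, tanh nonlinearity, forward-Euler integration,
# autodiff in place of the paper's gradient computation.
import jax
import jax.numpy as jnp

N = 10          # number of units (assumed small, fully connected)
T = 200         # number of Euler steps
dt = 0.1        # integration step size
tau = 1.0       # time constant of every unit

key = jax.random.PRNGKey(0)
W0 = 0.1 * jax.random.normal(key, (N, N))     # initial weights

# Desired trajectory: units 0 and 1 should follow a circle (a limit cycle).
ts = jnp.arange(T) * dt
target = jnp.stack([jnp.sin(ts), jnp.cos(ts)], axis=1)

def trajectory(W):
    """Integrate dy/dt = (-y + tanh(W y)) / tau with forward Euler."""
    def step(y, _):
        y_next = y + dt * (-y + jnp.tanh(W @ y)) / tau
        return y_next, y_next
    y0 = jnp.zeros(N).at[1].set(1.0)          # start on the desired cycle
    _, ys = jax.lax.scan(step, y0, None, length=T)
    return ys                                  # shape (T, N)

def error(W):
    """Trajectory error E: squared deviation of the output units over time."""
    ys = trajectory(W)
    return jnp.sum((ys[:, :2] - target) ** 2) * dt

grad_E = jax.jit(jax.grad(error))              # dE/dW over the whole trajectory

W = W0
for it in range(2000):                         # plain gradient descent on the weights
    W = W - 0.05 * grad_E(W)
    if it % 500 == 0:
        print(f"step {it}: E = {float(error(W)):.4f}")
```

The point of the sketch is only the structure: the error is a functional of the whole state trajectory rather than of a settled fixed point, and its gradient with respect to the weights drives ordinary gradient descent.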