Abstract
A number of procedures are described for computing $\partial E/\partial w_{ij}$, where $E$ is an error functional of the temporal trajectory of the states of a continuous recurrent network and the $w_{ij}$ are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize $E$, so these procedures form the kernels of connectionist learning algorithms. Simulations in which networks are taught to move through limit cycles are shown, along with some empirical perturbation sensitivity tests. The author describes a number of elaborations of the basic idea, including mutable time delays and teacher forcing, and includes a complexity analysis of the various learning procedures discussed. Temporally continuous recurrent networks seem particularly suited to temporally continuous domains, such as signal processing, control, and speech.
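As a minimal illustrative sketch (not taken from the abstract itself), the error functional can be pictured as an integral of an instantaneous cost $e$ over the trajectory, and the gradients $\partial E/\partial w_{ij}$ then drive a standard gradient-descent update; the symbols $\eta$, $t_0$, $t_1$, and the particular form of $e$ are assumptions for illustration only:

$$E = \int_{t_0}^{t_1} e\bigl(\mathbf{y}(t),\, t\bigr)\, dt, \qquad \Delta w_{ij} = -\eta \,\frac{\partial E}{\partial w_{ij}}$$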