Learning in the Recurrent Random Neural Network
- 1 January 1993
- journal article
- Published by MIT Press in Neural Computation
- Vol. 5 (1), 154-164
- https://doi.org/10.1162/neco.1993.5.1.154
Abstract
The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation"-type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair.
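The n nonlinear equations mentioned above are the steady-state signal-flow equations of the random neural network, in which each neuron's stationary excitation probability is q_i = λ⁺_i / (r_i + λ⁻_i), with λ⁺_i and λ⁻_i the total excitatory and inhibitory arrival rates. As a rough illustration only (not the paper's own code; the weight names, rates, and fixed-point iteration scheme are assumptions for this sketch), one way to solve these equations numerically:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lam, lam, tol=1e-10, max_iter=1000):
    """Solve the n nonlinear RNN signal-flow equations by fixed-point iteration.

    W_plus[j, i]  : excitatory weight w+(j, i)  (assumed notation)
    W_minus[j, i] : inhibitory weight w-(j, i)
    Lam, lam      : exogenous excitatory / inhibitory arrival rates
    Returns q, the stationary excitation probabilities, clipped below 1.
    """
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)  # firing rates r(i)
    q = np.zeros(len(Lam))
    for _ in range(max_iter):
        lam_plus = Lam + q @ W_plus    # total excitatory arrival rate at each neuron
        lam_minus = lam + q @ W_minus  # total inhibitory arrival rate at each neuron
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0 - 1e-12)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Small 3-neuron example with arbitrary (made-up) weights and rates
rng = np.random.default_rng(0)
W_plus = rng.uniform(0.0, 0.5, (3, 3))
W_minus = rng.uniform(0.0, 0.5, (3, 3))
Lam = np.array([0.4, 0.3, 0.2])
lam = np.array([0.5, 0.5, 0.5])
q = rnn_steady_state(W_plus, W_minus, Lam, lam)
```

In the learning algorithm itself, a gradient step on the quadratic error additionally requires the derivatives of each q_i with respect to the weights, which is where the accompanying system of n linear equations arises; that part is omitted from this sketch.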
References
- A Learning Algorithm for Boltzmann Machines, published by Wiley, 2010
- Stability of the Random Neural Network Model, Neural Computation, 1990
- Random Neural Networks with Negative and Positive Signals and Product Form Solution, Neural Computation, 1989
- Learning State Space Trajectories in Recurrent Neural Networks, Neural Computation, 1989
- Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation, Neural Computation, 1989