Abstract
A fast learning algorithm is proposed for training multilayer feedforward neural networks, based on a combination of optimal linear Kalman filtering theory and error propagation. In this algorithm, all the information available from the start of training up to the current sample is exploited in the update of each neuron's weight vector, through an efficient, parallel, recursive procedure. The algorithm admits a massively parallel implementation and has better convergence properties than the conventional backpropagation learning technique. Its performance is illustrated on examples such as the XOR logical operation and a nonlinear mapping of two continuous signals.
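
The abstract does not spell out the update equations, so the following Python fragment is only a minimal sketch of the kind of per-neuron recursion it describes: a Kalman (recursive least-squares) weight update in which a matrix P summarizes all samples seen since the start of training, so each new sample refines the weight vector without revisiting past data. The function name kalman_update, the forgetting factor lam, and the single-linear-neuron framing are illustrative assumptions, not the paper's notation; in the full algorithm a desired pre-activation value would be back-propagated to each neuron and an update of this kind applied to every neuron in parallel.

```python
import numpy as np

# Hypothetical sketch of a per-neuron Kalman/RLS weight update; names and
# framing are illustrative assumptions, not the paper's notation.

def kalman_update(w, P, x, d, lam=0.99):
    """One recursive update of a neuron's weight vector.

    w   : current weight vector, shape (n,)
    P   : inverse input-correlation matrix, shape (n, n); it carries
          the information from all past samples
    x   : current input vector, shape (n,)
    d   : desired linear (pre-activation) output for this neuron
    lam : forgetting factor; lam = 1 weighs all samples equally
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # Kalman gain
    e = d - w @ x                    # a-priori error on the linear output
    w = w + k * e                    # weight correction
    P = (P - np.outer(k, Px)) / lam  # recursive update of P
    return w, P

# Usage: fit a noisy linear mapping sample by sample.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
w = np.zeros(3)
P = np.eye(3) * 100.0                # large initial P = low confidence
for _ in range(200):
    x = rng.standard_normal(3)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, P = kalman_update(w, P, x, d)
print(np.round(w, 2))                # approximately [ 2.  -1.   0.5]
```

Because P is updated recursively, each step costs O(n^2) per neuron rather than refitting on the whole history, which is consistent with the abstract's claim of exploiting all past information efficiently.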
