Computing Gradient Vector and Jacobian Matrix in Arbitrarily Connected Neural Networks

Abstract
This paper describes a new algorithm that uses neuron-by-neuron computation to obtain the gradient vector and the Jacobian matrix. The algorithm can handle networks with arbitrarily connected neurons. Its training speed is comparable to that of the Levenberg-Marquardt algorithm, which is currently considered by many to be the fastest algorithm for neural network training. More importantly, it is shown that computing the Jacobian, which is required for second-order algorithms, has a computational complexity similar to that of computing the gradient for first-order learning methods. The new algorithm is implemented in the newly developed software, Neural Network Trainer, which has the unique capability of handling arbitrarily connected networks. Such networks, with connections across layers, can be more efficient than commonly used multilayer perceptron networks.
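The abstract contrasts the Jacobian needed by second-order methods with the gradient used by first-order methods. A minimal sketch of that relationship follows; it is not the paper's neuron-by-neuron algorithm, only the standard identities g = Jᵀe and the Levenberg-Marquardt update Δw = −(JᵀJ + μI)⁻¹Jᵀe for a sum-of-squares error, with toy values chosen for illustration.

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight update for E = 1/2 * sum(e_p^2):
    dw = -(J^T J + mu*I)^(-1) J^T e  (illustrative sketch only)."""
    n = J.shape[1]
    g = J.T @ e                    # gradient, as used by first-order methods
    H = J.T @ J + mu * np.eye(n)   # damped Gauss-Newton approximation of the Hessian
    return -np.linalg.solve(H, g)

# Hypothetical toy numbers: 3 training errors, 2 weights.
J = np.array([[ 0.5, -1.0],
              [ 0.2,  0.3],
              [-0.7,  0.1]])
e = np.array([0.1, -0.4, 0.2])

print("gradient  g = J^T e :", J.T @ e)
print("LM update dw        :", lm_step(J, e, mu=0.01))
```

Once the Jacobian rows are available, the gradient falls out of the same quantities at essentially no extra cost, which is why the paper's observation that the two computations have similar complexity matters for second-order training.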
