Serial and parallel backpropagation convergence via nonmonotone perturbed minimization
- 1 January 1994
- journal article
- research article
- Published by Taylor & Francis in Optimization Methods and Software
- Vol. 4 (2) , 103-116
- https://doi.org/10.1080/10556789408805581
Abstract
A general convergence theorem is proposed for a family of serial and parallel nonmonotone unconstrained minimization methods with perturbations. A principal application of the theorem is to establish convergence of backpropagation (BP), the classical algorithm for training artificial neural networks. Under certain natural assumptions, such as divergence of the sum of the learning rates and convergence of the sum of their squares, it is shown that every accumulation point of the BP iterates is a stationary point of the error function associated with the given set of training examples. The results presented cover serial and parallel BP, as well as modified BP with a momentum term.
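For intuition, the sketch below illustrates the kind of learning-rate schedule the abstract refers to: step sizes eta_k = c/(k+1) satisfy a divergent sum with a convergent sum of squares, and setting the momentum coefficient beta to zero recovers plain gradient descent. This is only a minimal illustration on a toy quadratic error function; the function names, constants, and the specific schedule are assumptions for the example and are not taken from the paper.

```python
# Minimal illustrative sketch (not the paper's algorithm): gradient descent with
# a diminishing step-size sequence eta_k = c/(k+1), which satisfies
# sum_k eta_k = infinity and sum_k eta_k^2 < infinity, plus an optional momentum term.
import numpy as np

def train(grad, w0, c=0.5, beta=0.0, iters=1000):
    """Run (momentum) gradient descent with step sizes eta_k = c / (k + 1)."""
    w = np.asarray(w0, dtype=float)
    velocity = np.zeros_like(w)
    for k in range(iters):
        eta = c / (k + 1)                           # divergent sum, convergent sum of squares
        velocity = beta * velocity - eta * grad(w)  # beta = 0 gives the plain gradient step
        w = w + velocity
    return w

# Toy usage: minimize E(w) = 0.5 * ||A w - b||^2, whose gradient is A^T (A w - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
grad = lambda w: A.T @ (A @ w - b)
w_final = train(grad, w0=[0.0, 0.0], beta=0.9)
print(w_final)  # approaches the stationary point [0.5, -1.0]
```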