A rationalized error back-propagation learning algorithm

Abstract
A method is proposed for learning in multilayer perceptrons (MLPs). It includes self-adapting features that make it suited to a variety of problems without the need for parameter readjustment. The validity of the approach is benchmarked on two types of problems. The first benchmark is the topologically complex parity problem, with the number of binary inputs ranging from 2 (the simplest case, the exclusive-OR problem) to 7 (a much more complex problem). The statistically averaged learning times are compared with the best results obtainable with conventional error back-propagation (EBP) and are shorter by two to three orders of magnitude. The second problem type arises when high accuracy is needed in separating example classes, i.e., when different output sign patterns of the MLP are required for slightly different input variables. As the minimum Euclidean distance ε between the classes to be separated decreases, the best learning times obtained with conventional EBP grow roughly as 1/ε², whereas the present algorithm yields substantially shorter learning times that grow like log(1/ε).
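
The abstract does not spell out the rationalized algorithm itself, so the sketch below only reproduces the conventional-EBP baseline side of the parity benchmark: an n-bit parity dataset and a one-hidden-layer sigmoid MLP trained by plain batch gradient descent on a mean-squared error. The network width, learning rate, convergence tolerance, and epoch limit are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the conventional-EBP parity baseline described in the
# abstract. Hyperparameters (hidden=8, lr=0.5, tol=0.05) are assumptions
# chosen for illustration, not taken from the paper.
import itertools
import numpy as np

def parity_data(n_bits):
    """All 2^n binary input patterns with 0/1 parity targets."""
    X = np.array(list(itertools.product([0.0, 1.0], repeat=n_bits)))
    y = np.where(X.sum(axis=1) % 2 == 1, 1.0, 0.0).reshape(-1, 1)
    return X, y

def train_ebp(n_bits, hidden=8, lr=0.5, max_epochs=100_000, tol=0.05, seed=0):
    """Plain batch gradient descent; returns epochs to converge, or None."""
    rng = np.random.default_rng(seed)
    X, y = parity_data(n_bits)
    W1 = rng.normal(scale=0.5, size=(n_bits, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for epoch in range(max_epochs):
        h = sig(X @ W1 + b1)            # forward pass
        out = sig(h @ W2 + b2)
        err = out - y
        if np.max(np.abs(err)) < tol:   # every pattern within tolerance
            return epoch
        d_out = err * out * (1 - out)           # backward pass (MSE + sigmoid)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return None  # did not converge within max_epochs

if __name__ == "__main__":
    # Learning time grows steeply with input width; runs may be slow or
    # fail to converge for larger n, which is the behaviour the paper's
    # algorithm is reported to improve by two to three orders of magnitude.
    for n in range(2, 5):
        print(f"{n} bits: {train_ebp(n)} epochs")
```

Counting epochs until every pattern is classified within tolerance, averaged over random seeds, yields the kind of learning-time statistic the abstract compares across the two methods.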
