The effect of initial weights on premature saturation in back-propagation learning

Abstract
A critical drawback of the backpropagation learning algorithm is its slow error convergence. The major cause is premature saturation, a phenomenon in which the error of a neural network stays almost constant for some period during learning. Premature saturation is known to result from an inappropriate set of initial weights. The probability that output nodes are incorrectly saturated at the first epoch of learning is derived as a function of the range of the initial weights, the number of nodes in each layer, and the maximum slope of the sigmoidal activation function; the derivation is verified by Monte Carlo simulation.
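As a rough illustration of the kind of Monte Carlo check the abstract describes, the sketch below estimates the fraction of output nodes that are "incorrectly" saturated (driven near the extreme opposite their target) at initialization, for weights drawn uniformly from a symmetric range. The single-hidden-layer architecture, the uniform input distribution, the saturation threshold, and all parameter names are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def incorrect_saturation_probability(n_inputs, n_hidden, n_outputs,
                                     weight_range, slope=1.0,
                                     n_trials=10_000, threshold=0.9,
                                     rng=None):
    """Monte Carlo estimate of the expected fraction of output nodes
    saturated at the wrong extreme at the first epoch, with weights
    drawn uniformly from [-weight_range, +weight_range].

    `slope` is the maximum-slope parameter of the sigmoid; `threshold`
    defines "saturated" (an assumed cutoff, not from the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_trials):
        x = rng.uniform(0.0, 1.0, size=n_inputs)          # random input pattern
        t = rng.integers(0, 2, size=n_outputs)            # random binary targets
        w1 = rng.uniform(-weight_range, weight_range, size=(n_hidden, n_inputs))
        w2 = rng.uniform(-weight_range, weight_range, size=(n_outputs, n_hidden))
        h = 1.0 / (1.0 + np.exp(-slope * (w1 @ x)))       # sigmoid hidden layer
        y = 1.0 / (1.0 + np.exp(-slope * (w2 @ h)))       # sigmoid output layer
        # An output is incorrectly saturated if it sits near one extreme
        # while its target lies at the other.
        wrong = ((y > threshold) & (t == 0)) | ((y < 1.0 - threshold) & (t == 1))
        total += np.mean(wrong)
    return total / n_trials

# Sweeping the initial-weight range shows the qualitative dependence
# the abstract claims: wider ranges make incorrect saturation more likely.
for r in (0.5, 1.0, 2.0, 4.0):
    p = incorrect_saturation_probability(10, 10, 3, weight_range=r)
    print(f"range ±{r}: P(incorrect saturation) ≈ {p:.3f}")
```

This only mimics the verification step, not the analytical derivation; the paper expresses the probability in closed form in terms of the same three quantities swept here.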
