Fast backpropagation learning using steep activation functions and automatic weight reinitialization
- 9 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 1587-1592, vol. 3
- https://doi.org/10.1109/icsmc.1991.169915
Abstract
Several backpropagation (BP) learning speed-up algorithms that employ the gain parameter, i.e., the steepness of the activation function, are examined to determine the effect of increased gain on learning time. Simulations show that although these algorithms can converge faster than the standard BP learning algorithm on some problems, their convergence is unstable: they frequently fail to converge within a finite time. One main cause of this divergence is an inappropriate setting of the initial weights in the network. To overcome this instability, automatic random reinitialization of the weights is proposed whenever the convergence speed becomes very slow. BP learning algorithms with this weight reinitialization and a larger initial gain (around 2 or 3) were found to be much faster and more stable in convergence than those without weight reinitialization.
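The scheme described in the abstract can be illustrated with a minimal sketch. It assumes a single-hidden-layer network with a logistic sigmoid whose steepness is set by a gain parameter, batch gradient descent, and a simple stall test that triggers random reinitialization of the weights; the network size, learning rate, thresholds, and XOR task below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain):
    # Logistic activation with a gain (steepness) parameter; gain > 1 steepens the curve.
    return 1.0 / (1.0 + np.exp(-gain * x))

def init_weights(n_in, n_hidden, n_out):
    # Small random weights, drawn anew on every (re)initialization.
    return (rng.uniform(-0.5, 0.5, (n_in, n_hidden)),
            rng.uniform(-0.5, 0.5, (n_hidden, n_out)))

def train(X, T, gain=2.0, lr=0.5, max_epochs=10000,
          patience=200, min_improvement=1e-4, target_error=0.01):
    """Batch backpropagation with a steep sigmoid; weights are randomly
    reinitialized whenever the error stops improving for `patience` epochs."""
    W1, W2 = init_weights(X.shape[1], 4, T.shape[1])
    best_error, stall = np.inf, 0
    for epoch in range(max_epochs):
        # Forward pass.
        h = sigmoid(X @ W1, gain)
        y = sigmoid(h @ W2, gain)
        err = 0.5 * np.sum((T - y) ** 2)
        if err < target_error:
            return W1, W2, epoch
        # Track convergence speed; reinitialize when progress stalls.
        if best_error - err > min_improvement:
            best_error, stall = err, 0
        else:
            stall += 1
            if stall >= patience:
                W1, W2 = init_weights(X.shape[1], 4, T.shape[1])
                best_error, stall = np.inf, 0
                continue
        # Backward pass; the gain multiplies the sigmoid derivative g*y*(1-y).
        delta_out = (T - y) * gain * y * (1 - y)
        delta_hid = (delta_out @ W2.T) * gain * h * (1 - h)
        W2 += lr * h.T @ delta_out
        W1 += lr * X.T @ delta_hid
    return W1, W2, max_epochs

# Illustrative usage on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, epochs = train(X, T, gain=2.0)
print("stopped after", epochs, "epochs")
```

The stall-triggered reinitialization in this sketch reflects the abstract's point: a larger gain (around 2 or 3) speeds learning but makes convergence sensitive to the initial weight draw, so a fresh random draw is taken whenever the error stops decreasing.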