The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions
- 1 April 1998
- journal article
- research article
- Published by World Scientific Pub Co Pte Ltd in International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems
- Vol. 06 (02), 107-116
- https://doi.org/10.1142/s0218488598000094
Abstract
Recurrent nets are in principle capable of storing past inputs to produce the currently desired output. Because of this property, recurrent nets are used in time series prediction and process control. Practical applications involve temporal dependencies spanning many time steps, e.g. between relevant inputs and desired outputs. In this case, however, gradient based learning methods take too much time. The greatly increased learning time arises because the error vanishes as it gets propagated back. In this article the decaying error flow is theoretically analyzed. Then methods trying to overcome vanishing gradients are briefly discussed. Finally, experiments comparing conventional algorithms and alternative methods are presented. With advanced methods, long time lag problems can be solved in reasonable time.
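The decay of backpropagated error that the abstract describes can be illustrated with a small numerical sketch. This is not code from the paper: it assumes a one-unit tanh recurrent net with recurrent weight w, and the function name backprop_error_norms is purely illustrative. At each backward step the error is multiplied by w * tanh'(a_t), so with |w| < 1 the error reaching early time steps shrinks roughly exponentially.

```python
# Minimal sketch (illustrative only): error decay in a one-unit recurrent net
# with recurrence h_t = tanh(w * h_{t-1} + x_t).
import numpy as np

def backprop_error_norms(w=0.9, T=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=T)

    # Forward pass: store pre-activations so tanh'(a_t) can be evaluated later.
    h, pre = 0.0, []
    for t in range(T):
        a = w * h + x[t]
        pre.append(a)
        h = np.tanh(a)

    # Backward pass: start with unit error at the last step; each step back
    # multiplies the error by w * (1 - tanh(a_t)^2).
    delta, norms = 1.0, []
    for t in reversed(range(T)):
        norms.append(abs(delta))
        delta *= w * (1.0 - np.tanh(pre[t]) ** 2)
    return norms[::-1]  # norms[t] = error magnitude reaching time step t

if __name__ == "__main__":
    norms = backprop_error_norms()
    for t in (49, 40, 25, 10, 0):
        print(f"error magnitude reaching step {t:2d}: {norms[t]:.3e}")
```

Running the sketch shows the error magnitude dropping by many orders of magnitude between the last and the first time step, which is why gradient based learning over long time lags becomes impractically slow.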