Learning long-term dependencies with gradient descent is difficult
- 1 March 1994
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 5 (2), pp. 157-166
- https://doi.org/10.1109/72.279181
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production, or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
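
The abstract's central claim, that gradient-based learning degrades as the temporal span of a dependency grows, can be illustrated numerically. Below is a minimal sketch, not code from the paper: it assumes a simple tanh recurrence h_t = tanh(W h_{t-1} + U x_t), and the hidden size, weight scale, and sequence length are arbitrary illustration choices rather than the authors' experimental setup. When the recurrent map is contractive (which the paper shows is needed to latch information robustly), the norm of the Jacobian of the current state with respect to an early state, the factor that carries error signals back through time, shrinks roughly exponentially with the time gap.

```python
# Minimal sketch (assumed setup, not the paper's): vanishing gradients in a
# contractive tanh recurrence h_t = tanh(W h_{t-1} + U x_t).
import numpy as np

rng = np.random.default_rng(0)
n = 20                                               # hidden-state size (arbitrary)
W = rng.normal(scale=0.3 / np.sqrt(n), size=(n, n))  # small weights keep ||W||_2 well below 1
U = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
x = rng.normal(size=(60, n))                         # random input sequence of length 60

h = np.zeros(n)
J = np.eye(n)                                        # running product: d h_t / d h_0
for t, x_t in enumerate(x, start=1):
    pre = W @ h + U @ x_t
    h = np.tanh(pre)
    # One-step Jacobian d h_t / d h_{t-1} = diag(1 - tanh(pre)^2) @ W
    J = np.diag(1.0 - h**2) @ W @ J
    if t % 10 == 0:
        print(f"t = {t:3d}   ||d h_t / d h_0||_2 = {np.linalg.norm(J, 2):.3e}")
```

Running this prints spectral norms that fall by several orders of magnitude between t = 10 and t = 60, which is one face of the trade-off the abstract describes: recurrent weights small enough to store state stably are also small enough to wash out the gradient signal from distant time steps.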