Empirical comparison of various methods for training feed-forward neural networks for salinity forecasting
- 1 August 1999
- journal article
- Published by American Geophysical Union (AGU) in Water Resources Research
- Vol. 35 (8) , 2591-2596
- https://doi.org/10.1029/1999wr900150
Abstract
Feed-forward artificial neural networks (ANNs) are being used increasingly to model water resources variables. In this technical note, six methods for optimizing the connection weights of feed-forward ANNs are investigated in terms of generalization ability, parsimony, and training speed. These include the generalized delta (GD) rule, the normalized cumulative delta (NCD) rule, the delta-bar-delta (DBD) algorithm, the extended-delta-bar-delta (EDBD) algorithm, the QuickProp (QP) algorithm, and the MaxProp (MP) algorithm. Each of these algorithms is applied to a particular case study, the forecasting of salinity in the River Murray at Murray Bridge, South Australia. Thirty models are developed for each algorithm, starting from different positions in weight space. The results obtained indicate that the generalization ability of the first-order methods investigated (i.e., GD, NCD, DBD, and EDBD) is better than that of the second-order algorithms (i.e., QP and MP). When the prediction errors are averaged over the 30 trials carried out, the performance of the first-order methods in which the size of the steps taken in weight space is automatically adjusted in response to changes in the error surface (i.e., DBD and EDBD) is better than that obtained when predetermined step sizes are used (i.e., GD and NCD). However, the reverse applies when the best forecasts of the 30 trials are considered. The results obtained indicate that the EDBD algorithm is the most parsimonious and the MP algorithm is the least parsimonious. It was found that any impact different learning rules have on training speed is masked by the effect of epoch size and the number of hidden nodes required for optimal model performance.
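To illustrate the kind of adaptive step-size rule the abstract contrasts with fixed-step methods, the following is a minimal sketch of a delta-bar-delta (DBD) style update: each weight keeps its own learning rate, which grows additively when the current gradient agrees in sign with an exponential average of past gradients and shrinks multiplicatively when the sign flips. The hyperparameter names and values (`kappa`, `phi`, `theta`) and the toy quadratic loss are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar,
                         kappa=0.01, phi=0.2, theta=0.7):
    """One DBD-style update with per-weight adaptive learning rates.

    kappa: additive learning-rate increase on consistent gradient sign
    phi:   multiplicative decrease factor on a sign flip
    theta: decay for the exponential average of past gradients
    (all values illustrative, not from the paper)
    """
    agree = grad * delta_bar
    lr = np.where(agree > 0, lr + kappa,            # consistent sign: grow step
         np.where(agree < 0, lr * (1 - phi), lr))   # sign flip: shrink step
    w = w - lr * grad                               # gradient descent step
    delta_bar = (1 - theta) * grad + theta * delta_bar  # update running average
    return w, lr, delta_bar

# Toy example: minimize 0.5 * w**2, whose gradient is simply w.
w = np.array([1.0, -2.0])
lr = np.full_like(w, 0.1)
delta_bar = np.zeros_like(w)
for _ in range(50):
    w, lr, delta_bar = delta_bar_delta_step(w, w, lr, delta_bar)
```

Because the gradient sign stays consistent on this convex toy problem, the per-weight learning rates grow and the iterates converge faster than they would with the fixed initial step size, which is the behavior the note credits for DBD's and EDBD's better average-case performance.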