Making use of population information in evolutionary artificial neural networks
- 1 June 1998
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
- Vol. 28 (3), 417-425
- https://doi.org/10.1109/3477.678637
Abstract
This paper is concerned with the simultaneous evolution of artificial neural network (ANN) architectures and weights. The current practice in evolving ANN's is to choose the best ANN in the last generation as the final result. This paper proposes a different approach to form the final result by combining all the individuals in the last generation in order to make best use of all the information contained in the whole population. This approach regards a population of ANN's as an ensemble and uses a combination method to integrate them. Although there has been some work on integrating ANN modules, little has been done in evolutionary learning to make best use of its population information. Four linear combination methods have been investigated in this paper to illustrate our ideas. Three real-world data sets have been used in our experimental studies, which show that the recursive least-square (RLS) algorithm always produces an integrated system that outperforms the best individual. The results confirm that a population contains more information than a single individual. Evolutionary learning should exploit such information to improve generalization of learned systems.
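The abstract does not spell out the combination procedure, but the RLS-based linear combination it describes can be sketched as follows: treat each population member's output as one input to a linear combiner and fit the combination weights with a standard recursive least-squares update. The member predictors, forgetting factor `lam`, and initialization constant `delta` below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rls_combination_weights(member_outputs, targets, lam=1.0, delta=100.0):
    """Fit linear combination weights over ensemble member outputs
    using the standard recursive least-squares (RLS) update.

    member_outputs : (n_samples, n_members) array of individual predictions
    targets        : (n_samples,) array of desired outputs
    lam            : forgetting factor (1.0 recovers ordinary least squares)
    delta          : initial scale of the inverse correlation matrix (assumed)
    """
    n_samples, n_members = member_outputs.shape
    w = np.zeros(n_members)          # combination weights
    P = delta * np.eye(n_members)    # estimate of the inverse correlation matrix
    for x, d in zip(member_outputs, targets):
        Px = P @ x
        k = Px / (lam + x @ Px)      # gain vector
        e = d - w @ x                # a priori prediction error
        w = w + k * e                # weight update
        P = (P - np.outer(k, Px)) / lam
    return w

# Toy usage: three hypothetical "population members" approximate sin(pi * x);
# real members would be evolved ANN's, which are out of scope for this sketch.
rng = np.random.default_rng(0)
inputs = rng.uniform(-1.0, 1.0, size=200)
targets = np.sin(np.pi * inputs)
members = np.column_stack([
    inputs,                      # linear approximator
    inputs - inputs**3 / 6.0,    # cubic Taylor approximator
    np.tanh(2.0 * inputs),       # saturating approximator
])
w = rls_combination_weights(members, targets)
ensemble = members @ w
print("combination weights:", w)
print("ensemble MSE:", np.mean((ensemble - targets) ** 2))
print("best single MSE:", min(np.mean((members[:, j] - targets) ** 2)
                              for j in range(members.shape[1])))
```

With `lam = 1.0` the recursion converges toward the batch least-squares solution, so the combined output should match or beat the best single member on the fitting data, in the spirit of the result the abstract reports.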