Generalization in feed forward neural networks

Abstract
It is noted that many aspects of the problem of improving generalization in feedforward neural networks have not been studied in any depth. The authors address the importance of this problem and propose two techniques to improve generalization: proper selection of the training ensemble and a partitioned learning strategy. These techniques are applied to a complex 2D classification problem. The authors also evaluate network generalization when using the cascade-correlation learning architecture. It is shown that generalization is not trivial when decision boundaries are complex, that proper selection of the training ensemble can improve generalization, and that a partitioned learning strategy can further enhance generalization in feedforward networks. The results also suggest that cascade correlation yields good generalization on test data.
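
As a minimal sketch of the first idea (how the choice of training ensemble can affect generalization on a 2D problem with a complex decision boundary), the toy example below compares a randomly drawn training set with one biased toward points near the boundary. The wavy decision rule, the network size, and the margin-based selection heuristic are illustrative assumptions, not the authors' actual setup or data.

```python
# Illustrative sketch only: the decision rule, network, and selection heuristic
# are assumptions for demonstration, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def label(points):
    # Hypothetical "complex" 2D decision rule with a wavy boundary.
    x, y = points[:, 0], points[:, 1]
    return (y > 0.5 * np.sin(3 * x)).astype(int)

# Candidate pool for training-set selection and a held-out test set.
pool = rng.uniform(-2, 2, size=(5000, 2))
test = rng.uniform(-2, 2, size=(2000, 2))
y_pool, y_test = label(pool), label(test)

n_train = 200

# (a) Random selection of the training ensemble.
idx_random = rng.choice(len(pool), n_train, replace=False)

# (b) Selection biased toward the decision boundary (smallest margin first),
#     a stand-in for deliberate selection of the training ensemble.
margin = np.abs(pool[:, 1] - 0.5 * np.sin(3 * pool[:, 0]))
idx_boundary = np.argsort(margin)[:n_train]

# Train the same small feedforward network on each ensemble and compare
# test accuracy as a rough proxy for generalization.
for name, idx in [("random", idx_random), ("near-boundary", idx_boundary)]:
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(pool[idx], y_pool[idx])
    print(f"{name:14s} ensemble -> test accuracy {net.score(test, y_test):.3f}")
```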
