Feed-forward neural networks
- 1 October 1994
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Potentials
- Vol. 13 (4), 27-31
- https://doi.org/10.1109/45.329294
Abstract
One critical aspect neural network designers face today is choosing an appropriate network size for a given application. For layered neural network architectures, network size involves the number of layers, the number of nodes per layer, and the number of connections. Roughly speaking, a neural network implements a nonlinear mapping u = G(x). The mapping function G is established during a training phase in which the network learns to correctly associate input patterns x with output patterns u. Given a set of training examples (x, u), there is probably an infinite number of networks of different sizes that can learn to map input patterns x into output patterns u. The question is, which network size is most appropriate for a given problem? Unfortunately, the answer is not always obvious. Many researchers agree that the quality of the solution found by a neural network depends strongly on the network size used. In general, network size affects network complexity and learning time. It also affects the generalization capability of the network; that is, its ability to produce accurate results on patterns outside its training set.
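To make the size parameters concrete, here is a minimal Python/NumPy sketch (an illustration, not code from the paper): the list `layer_sizes` fixes the number of layers, the nodes per layer, and hence the number of connections, and `forward` implements the mapping u = G(x) with sigmoid activations. All names and parameter choices are assumptions for illustration.

```python
# A minimal sketch, not the authors' code: a fully connected
# feed-forward network whose size is set entirely by `layer_sizes`.
import numpy as np

def init_network(layer_sizes, seed=0):
    """Random weights and biases for each layer.

    layer_sizes = [n_in, n_hidden_1, ..., n_out] fixes the number
    of layers, nodes per layer, and number of connections.
    """
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    """The nonlinear mapping u = G(x): affine map + sigmoid per layer."""
    a = x
    for W, b in params:
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # sigmoid activation
    return a

# Two networks of different size can both map the same x to an output u;
# the abstract's question is which size suits a given problem.
x = np.ones(4)
small = init_network([4, 3, 2])       # 1 hidden layer, 3 nodes
large = init_network([4, 16, 16, 2])  # 2 hidden layers, 16 nodes each
print(forward(small, x), forward(large, x))
```

Both networks realize some mapping G after training; they differ in complexity, learning time, and, as the abstract notes, in how well they generalize beyond the training set.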