A Gaussian-based feedforward network architecture and complementary training algorithm
- 1 January 1991
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 171-176, vol. 1
- https://doi.org/10.1109/ijcnn.1991.170399
Abstract
The author describes a neural network architecture and training procedure that provide an efficient means of modeling complicated surface functions. Essentially, the technique operates by constructing surfaces in a step-wise manner out of Gaussian-shaped bumps and depressions. The rationale behind the approach is explained with reference to a surface modeling interpretation of layered feedforward networks. This is followed by a description of the training procedure, using the modeling of a cowboy-hat-shaped surface as an example problem. The advantages of the technique are that it ensures convergence on a solution to within any tolerance for a set of training patterns, converges rapidly, and circumvents the issue of how many hidden neurons to incorporate in a network. The author also presents a demonstration of how to smooth the output produced by a network and thereby improve its powers of interpolation, this time using the problem of drawing a square as an example.
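The step-wise construction described in the abstract can be illustrated with a greedy sketch: repeatedly place a Gaussian bump (or depression) at the worst-fit training point, with height equal to that point's residual, until every residual is within tolerance. This is only a minimal illustration of the idea, not the paper's exact training procedure; the fixed `width` parameter and the sombrero-style "cowboy hat" test surface are assumptions for the demo.

```python
import numpy as np

def fit_gaussian_bumps(X, y, width=0.4, tol=1e-2, max_bumps=200):
    """Greedily add Gaussian bumps/depressions until all training
    residuals are within `tol`. A sketch of step-wise surface
    construction, not the author's exact algorithm."""
    centers, amps = [], []
    resid = y.astype(float).copy()
    for _ in range(max_bumps):
        i = np.argmax(np.abs(resid))           # worst-fit training point
        if abs(resid[i]) <= tol:
            break
        c, a = X[i], resid[i]                  # bump centered there,
        centers.append(c)                      # height = current residual
        amps.append(a)
        d2 = np.sum((X - c) ** 2, axis=1)
        resid -= a * np.exp(-d2 / (2 * width ** 2))
    return np.array(centers), np.array(amps)

def predict(X, centers, amps, width=0.4):
    """Sum of all stored Gaussian bumps, evaluated at the rows of X."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ amps

# A "cowboy hat" (sombrero-like) surface on a coarse grid.
g = np.linspace(-3.0, 3.0, 7)
X = np.array([[a, b] for a in g for b in g])
r = np.hypot(X[:, 0], X[:, 1])
y = np.sinc(r)                                 # sin(pi*r) / (pi*r)

centers, amps = fit_gaussian_bumps(X, y, width=0.4, tol=1e-2)
max_err = np.max(np.abs(predict(X, centers, amps, width=0.4) - y))
print(len(amps), "bumps, max training error", max_err)
```

Because each new bump cancels the largest residual exactly while only weakly disturbing neighboring points (the width here is small relative to the grid spacing), the maximum residual shrinks geometrically, echoing the convergence-to-any-tolerance property the abstract claims. The number of bumps falls out of the fitting loop itself, which mirrors how the approach sidesteps choosing a hidden-layer size in advance.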