How Gaussian radial basis functions work
- 9 December 2002
- proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The learning process in radial basis function networks is investigated. The author examines gradient descent for learning the weights and discusses the nature of the learning process. The results explain why the networks learn and why convergence is nonuniform across frequencies. It is also found that choosing Gaussians as the basis functions may be ineffective for the high-frequency components of a complicated mapping, owing to excessively slow convergence. Computer simulation shows that a simple alternative choice of basis function can yield much better results.
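The abstract's central claim, that gradient descent on a Gaussian RBF network fits low-frequency targets much faster than high-frequency ones, can be illustrated with a small numerical sketch. The code below is not from the paper; it is a minimal NumPy reconstruction under assumed settings (20 evenly spaced centers, fixed width 0.05, output weights trained by plain gradient descent, sinusoidal targets at frequencies 1 and 8). After the same number of steps, the high-frequency target retains a much larger error, mirroring the nonuniform convergence the author describes.

```python
import numpy as np

def gaussian_rbf_design(x, centers, width):
    # Design matrix: phi_j(x_i) = exp(-(x_i - c_j)^2 / (2 w^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

def train_rbf(x, y, centers, width, lr=0.5, epochs=2000):
    # Gradient descent on the output weights only (centers and width
    # are held fixed), minimizing mean squared error.
    Phi = gaussian_rbf_design(x, centers, width)
    w = np.zeros(len(centers))
    for _ in range(epochs):
        err = Phi @ w - y            # residual on the training points
        w -= lr * (Phi.T @ err) / len(x)  # MSE gradient step
    return w, Phi

x = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, 20)   # assumed: even spacing, fixed width
width = 0.05

low = np.sin(2 * np.pi * 1 * x)       # low-frequency target
high = np.sin(2 * np.pi * 8 * x)      # high-frequency target

w_lo, Phi = train_rbf(x, low, centers, width)
w_hi, _ = train_rbf(x, high, centers, width)

mse_lo = np.mean((Phi @ w_lo - low) ** 2)
mse_hi = np.mean((Phi @ w_hi - high) ** 2)
print(f"low-frequency MSE:  {mse_lo:.4f}")
print(f"high-frequency MSE: {mse_hi:.4f}")
```

With identical training budgets, the low-frequency fit converges well while the high-frequency error stays large: the high-frequency target projects mainly onto small-eigenvalue directions of the Gaussian design matrix, so those weight components shrink toward their solution very slowly.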