Abstract
The learning behavior of radial basis function networks is investigated. The author examines gradient descent for learning the weights and discusses the nature of the learning process. The results explain why the networks learn and why convergence is nonuniform across frequencies. It is also found that choosing Gaussians as the basis functions may be ineffective for the high-frequency components of a complicated mapping, because convergence there is excessively slow. Computer simulation shows that a simple alternative choice of basis function can yield much better results.
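A minimal sketch of the setting the abstract describes: an RBF network with fixed Gaussian basis functions whose output weights are learned by plain gradient descent on the squared error. The 1-D target, the grid of centers, the width sigma, and the learning rate are illustrative assumptions, not values from the paper; the target mixes a low- and a high-frequency sinusoid so the slower convergence of the high-frequency part can be observed.

```python
import numpy as np

# Hypothetical 1-D example (not from the paper): training data whose target
# mixes a low-frequency and a high-frequency sinusoid.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(16 * np.pi * x)

# Fixed Gaussian basis functions: centers on a uniform grid, common width sigma.
centers = np.linspace(0.0, 1.0, 30)
sigma = 0.05
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))  # shape (200, 30)

# Gradient descent on the output weights w for the squared error
#   E(w) = 0.5 * ||Phi w - y||^2,   grad E = Phi^T (Phi w - y).
w = np.zeros(centers.size)
lr = 1e-3  # assumed step size, small enough for stability here
for step in range(5001):
    residual = Phi @ w - y
    w -= lr * (Phi.T @ residual)
    if step % 1000 == 0:
        print(f"step {step:5d}  mse {np.mean(residual ** 2):.4f}")
```

The printed mean squared error drops quickly at first (the smooth, low-frequency part of the target is captured early) and then decays slowly, consistent with the nonuniform convergence across frequencies discussed in the abstract.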
