On Functional Approximation with Normalized Gaussian Units
- 1 March 1994
- Journal article
- Published by MIT Press in Neural Computation
- Vol. 6 (2), 319-333
- https://doi.org/10.1162/neco.1994.6.2.319
Abstract
Feedforward neural networks with a single hidden layer of normalized Gaussian units are studied. It is proved that such networks are capable of universal approximation in a satisfactory sense. Then a hybrid learning rule in the manner of Moody and Darken, combining unsupervised learning of the hidden units with supervised learning of the output units, is considered. Using the method of ordinary differential equations for adaptive algorithms (ODE method), it is shown that the asymptotic properties of the learning rule can be studied in terms of an autonomous cascade of dynamical systems. Recent results of Hirsch on cascades are then used to establish the asymptotic stability of the learning rule.
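As a rough illustration of the kind of model and hybrid rule described in the abstract, the following is a minimal NumPy sketch, not the paper's implementation: it assumes a k-means stage to place the hidden-unit centers, a simple nearest-center bandwidth heuristic, and LMS updates of the linear output weights. All function names, the bandwidth heuristic, and the learning-rate and unit-count values are illustrative assumptions.

```python
# Sketch of a single-hidden-layer network with normalized Gaussian units,
# trained with a Moody-Darken-style hybrid rule: unsupervised placement of
# the centers (k-means), then supervised LMS on the output weights only.
import numpy as np

def normalized_gaussian_activations(X, centers, sigma):
    """h_i(x) = exp(-||x - c_i||^2 / (2 sigma^2)) / sum_j exp(-||x - c_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    g = np.exp(-sq_dists / (2.0 * sigma ** 2))
    return g / g.sum(axis=1, keepdims=True)  # normalization across hidden units

def fit_hybrid(X, y, n_units=10, n_kmeans_iter=20, n_lms_epochs=50, lr=0.1, seed=None):
    rng = np.random.default_rng(seed)
    # Unsupervised stage: place centers with a few k-means iterations on the inputs.
    centers = X[rng.choice(len(X), size=n_units, replace=False)]
    for _ in range(n_kmeans_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for i in range(n_units):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    # Global bandwidth heuristic (an assumption): mean nearest-center distance.
    d = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(d, np.inf)
    sigma = d.min(axis=1).mean()
    # Supervised stage: LMS updates of the linear output weights only.
    w = np.zeros(n_units)
    for _ in range(n_lms_epochs):
        for k in rng.permutation(len(X)):
            h = normalized_gaussian_activations(X[k:k + 1], centers, sigma)[0]
            w += lr * (y[k] - h @ w) * h
    return centers, sigma, w

def predict(X, centers, sigma, w):
    return normalized_gaussian_activations(X, centers, sigma) @ w

if __name__ == "__main__":
    # Usage: approximate a smooth 1-D target from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
    centers, sigma, w = fit_hybrid(X, y, n_units=12, seed=1)
    X_test = np.linspace(-3, 3, 50)[:, None]
    err = np.abs(predict(X_test, centers, sigma, w) - np.sin(X_test[:, 0])).mean()
    print(f"mean absolute error on test grid: {err:.3f}")
```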
References
- Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Published by Elsevier, 2003.
- Predicting the Future: Advantages of Semilocal Units. Neural Computation, 1991.
- Approximation capabilities of multilayer feedforward networks. Neural Networks, 1991.
- Layered Neural Networks with Gaussian Hidden Units as Universal Approximations. Neural Computation, 1990.
- Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 1989.
- On the approximate realization of continuous mappings by neural networks. Neural Networks, 1989.
- Multilayer feedforward networks are universal approximators. Neural Networks, 1989.
- Convergent activation dynamics in continuous time networks. Neural Networks, 1989.
- Analysis of recursive stochastic algorithms. IEEE Transactions on Automatic Control, 1977.
- Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics, 1976.