Function approximation with spiked random networks
- 1 January 1999
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 10 (1) , 3-9
- https://doi.org/10.1109/72.737488
Abstract
This paper examines the function approximation properties of the "random neural network" model, or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward Bipolar GNN (BGNN) model, which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0, 1]^s) and any ε > 0, we show that there exists a feedforward BGNN which approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is likewise a universal function approximator.
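The uniform-approximation criterion in the abstract (sup-norm error below ε on [0, 1]^s) can be illustrated numerically. The sketch below is an assumption-laden stand-in: it uses a simple piecewise-linear interpolant as the approximator `g`, not the BGNN itself (whose construction is not given here), purely to show what "approximates f uniformly with error less than ε" means for s = 1.

```python
import numpy as np

def sup_norm_error(f, g, n=10_001):
    """Estimate sup over x in [0, 1] of |f(x) - g(x)| on a dense grid."""
    x = np.linspace(0.0, 1.0, n)
    return float(np.max(np.abs(f(x) - g(x))))

# Target f in C([0, 1]) and a crude approximator g: a piecewise-linear
# interpolant on 64 knots, standing in for the network output.
f = np.sin
knots = np.linspace(0.0, 1.0, 64)
g = lambda x: np.interp(x, knots, f(knots))

eps = 1e-3
err = sup_norm_error(f, g)
print(err < eps)  # → True: uniform error below eps on this grid
```

For a smooth target, refining the interpolation knots drives the sup-norm error to zero, mirroring the existence claim in the theorem: for every ε > 0 some approximator achieves uniform error below ε.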