Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions
- 1 January 1998
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 9 (1), 224-229
- https://doi.org/10.1109/72.655045
Abstract
It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x(i),t(i)) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons is the signum function. This paper rigorously proves that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons and with any bounded nonlinear activation function which has a limit at one infinity can learn N distinct samples (x(i),t(i)) with zero error. The previous method of arbitrarily choosing weights is not feasible for any SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of the standard SLFNs with any such bounded nonlinear activation function, as opposed to iterative training algorithms in the literature.
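The core claim can be illustrated with a minimal numerical sketch. The following is not the paper's specific construction, but it shows the underlying linear-algebra argument: with N hidden neurons and a bounded nonlinear activation (tanh is used here as one such function), the N x N hidden-layer output matrix H is almost surely invertible for randomly ("almost arbitrarily") chosen hidden weights, so output weights solving H beta = T fit all N samples with zero error.

```python
# Illustrative sketch (assumed setup, not the paper's exact procedure):
# an SLFN with N hidden neurons fitting N distinct samples exactly.
import numpy as np

rng = np.random.default_rng(0)

N, d = 8, 3                       # N samples, input dimension d
X = rng.normal(size=(N, d))       # distinct inputs x(i)
T = rng.normal(size=(N, 1))       # targets t(i)

W = rng.normal(size=(d, N))       # input-to-hidden weights, chosen arbitrarily
b = rng.normal(size=N)            # hidden-neuron biases

H = np.tanh(X @ W + b)            # N x N hidden-layer output matrix
beta = np.linalg.solve(H, T)      # output weights solving H @ beta = T

outputs = H @ beta                # network outputs on the training samples
print(np.max(np.abs(outputs - T)))  # zero training error up to round-off
```

Because H is square and generically nonsingular, `np.linalg.solve` recovers the output weights directly, with no iterative training.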