Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions
- 1 January 1989
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 613-617, vol. 1
- https://doi.org/10.1109/ijcnn.1989.118640
Abstract
K.M. Hornik, M. Stinchcombe, and H. White (Univ. of California at San Diego, Dept. of Economics Discussion Paper, June 1988; to appear in Neural Networks) showed that multilayer feedforward networks with as few as one hidden layer, no squashing at the output layer, and arbitrary sigmoid activation function at the hidden layer are universal approximators: they are capable of arbitrarily accurate approximation to arbitrary mappings, provided sufficiently many hidden units are available. The present authors obtain identical conclusions but do not require the hidden-unit activation to be sigmoid. Instead, it can be a rather general nonlinear function. Thus, multilayer feedforward networks possess universal approximation capabilities by virtue of the presence of intermediate layers with sufficiently many parallel processors; the properties of the intermediate-layer activation function are not so crucial. In particular, sigmoid activation functions are not necessary for universal approximation.
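The claim in the abstract can be illustrated numerically: a one-hidden-layer network whose hidden units use a non-sigmoid activation (here cosine) can drive the approximation error toward zero as hidden units are added. The sketch below is illustrative only and is not the paper's construction: it fixes random hidden-layer weights and fits only the linear output weights by least squares, and the target mapping is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch (not the paper's method): one hidden layer
#   y(x) = sum_j beta_j * g(w_j * x + b_j)
# with non-sigmoid activation g = cos, no squashing at the output layer.
# Hidden weights w, b are random and fixed; only beta is fit (least squares).
rng = np.random.default_rng(0)

def sup_error(n_hidden, g=np.cos):
    x = np.linspace(0.0, 1.0, 200)
    target = np.exp(x) * np.sin(5.0 * x)      # arbitrary smooth target mapping
    w = rng.normal(scale=5.0, size=n_hidden)  # random hidden weights
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_hidden)  # random biases
    H = g(np.outer(x, w) + b)                 # hidden-unit outputs, (200, n_hidden)
    beta, *_ = np.linalg.lstsq(H, target, rcond=None)  # fit output weights
    return np.max(np.abs(H @ beta - target))  # sup-norm approximation error

# Approximation error should shrink as hidden units are added.
errors = [sup_error(n) for n in (5, 20, 80)]
print(errors)
```

With enough hidden units, even this crude random-feature fit approximates the target closely, consistent with the result that the activation need not be sigmoid.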
References
- Multilayer feedforward networks are universal approximators, Neural Networks, 1989
- There exists a neural network that does not make avoidable mistakes, IEEE, 1988