Comments on "Approximation capability in C(R(n)) by multilayer feedforward networks and related problems".
- 1 July 1998
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 9 (4), 714-715
- https://doi.org/10.1109/72.701184
Abstract
In the above paper, Chen et al. investigated the capability of standard feedforward neural networks to uniformly approximate functions in C(Rⁿ). They found that the boundedness condition on the sigmoidal activation function plays an essential role in the approximation, and conjectured that boundedness of the sigmoidal function is a necessary and sufficient condition for the validity of the approximation theorem. However, we find that this conjecture is not correct: the boundedness condition alone is neither sufficient nor necessary in C(Rⁿ). Instead, boundedness together with unequal limits at infinity is a sufficient, but not necessary, condition on the activation function in C(Rⁿ).
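For context, a minimal sketch, not taken from the comment itself, of the objects the abstract refers to: the standard single-hidden-layer feedforward network and the quoted sufficient condition on the activation (bounded, with unequal limits at infinity); the particular network notation and the logistic-sigmoid example are assumptions for illustration.

```latex
% Illustrative only: standard single-hidden-layer network form and the sufficient
% condition on the activation quoted in the abstract (notation assumed here).
\[
  N(x) \;=\; \sum_{i=1}^{m} c_i\, \sigma(w_i \cdot x + \theta_i), \qquad x \in \mathbb{R}^n,
\]
\[
  \sup_{t \in \mathbb{R}} |\sigma(t)| < \infty
  \qquad\text{and}\qquad
  \lim_{t \to -\infty} \sigma(t) \;\neq\; \lim_{t \to +\infty} \sigma(t).
\]
% For example, the logistic sigmoid \(\sigma(t) = 1/(1 + e^{-t})\) is bounded
% with limits 0 at \(-\infty\) and 1 at \(+\infty\), so it meets this condition.
```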