Neural nets for massively parallel optimization
- 1 July 1992
- proceedings article
- Published by SPIE-Intl Soc Optical Eng
- Vol. 1710, 229-238
- https://doi.org/10.1117/12.140088
Abstract
To apply massively parallel processing systems to the solution of large-scale optimization problems, it is desirable to be able to evaluate any function f(z), z ∈ ℝⁿ, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described, and a strategy for obtaining accurate minimal networks is presented.
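The central idea, that a single hidden layer suffices to approximate both a function and its gradient on an interval, can be illustrated with a minimal sketch. The tanh activation, the random choice of hidden weights, and the least-squares fit of the output layer below are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

# Sketch: approximate a smooth target f on [-1, 1] with a single-hidden-layer
# network  g(x) = sum_j c_j * tanh(w_j * x + b_j),  then recover g' in closed form.
rng = np.random.default_rng(0)

def f(x):
    return np.sin(np.pi * x)               # example target function

n_hidden = 30                               # number of hidden nodes (assumed)
w = rng.normal(scale=3.0, size=n_hidden)    # hidden weights, fixed at random
b = rng.uniform(-1.0, 1.0, size=n_hidden)   # hidden biases, fixed at random

x = np.linspace(-1.0, 1.0, 200)
H = np.tanh(np.outer(x, w) + b)             # hidden activations, shape (200, n_hidden)

# Fit only the output weights by linear least squares.
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

approx = H @ c
print("max |f - g| on the grid:", np.max(np.abs(f(x) - approx)))

# The gradient of g is available analytically: d/dx tanh(wx + b) = w (1 - tanh^2).
dH = (1.0 - H**2) * w
grad_approx = dH @ c
print("max |f' - g'| on the grid:",
      np.max(np.abs(np.pi * np.cos(np.pi * x) - grad_approx)))
```

Increasing n_hidden reduces both error measures; how many nodes are needed for a prescribed accuracy in the function and gradient values is exactly the question the paper addresses.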