OPTIMAL HIDDEN UNITS FOR TWO-LAYER NONLINEAR FEEDFORWARD NEURAL NETWORKS
- 1 October 1991
- journal article
- Published by World Scientific Pub Co Pte Ltd in International Journal of Pattern Recognition and Artificial Intelligence
- Vol. 5 (4), 545-561
- https://doi.org/10.1142/s0218001491000314
Abstract
The output layer of a feedforward neural network approximates nonlinear functions as a linear combination of a fixed set of basis functions, or "features". These features are learned by the hidden-layer units, often by a supervised algorithm such as back-propagation. This paper investigates features which are optimal for computing desired output functions from a given distribution of input data, and which must therefore be learned by a mixed supervised and unsupervised algorithm. A definition of optimal nonlinear features is proposed, and a constructive method, with an iterative implementation, is derived for finding them. The learning algorithm always converges to a global optimum, and the resulting network uses two layers to compute the hidden units. The general form of the features is derived for the case of continuous signal input, and this result is related to the transmission of information through a bandlimited channel. The results of other algorithms can be compared to the optimal features, which in some cases have easily computed closed-form solutions. The application of this technique to the inverse kinematics problem for a simulated planar two-joint robot arm is demonstrated.
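The abstract's central object is a network output of the form f(x) = Σ_j a_j φ_j(x): a linear combination of fixed nonlinear hidden features φ_j(x). The sketch below is a minimal illustration of that structure on the planar two-joint arm inverse-kinematics task the abstract mentions, not the paper's algorithm: it uses random tanh features where the paper would learn optimal ones, and the link lengths, feature count, and joint-angle ranges are illustrative assumptions. It does show the one point the abstract states directly: once the features are fixed, the optimal output weights have a closed-form least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Planar two-joint arm: joint angles -> end-effector position.
L1, L2 = 1.0, 0.8  # hypothetical link lengths

def forward_kinematics(thetas):
    t1, t2 = thetas[:, 0], thetas[:, 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=1)

# Hidden layer: a fixed set of nonlinear features phi(x). Random tanh
# units stand in for learned features; the paper's contribution is how
# to choose these optimally, which this sketch does not attempt.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

def features(X):
    return np.tanh(X @ W + b)

# Inverse-kinematics data: sample joint angles (both restricted so the
# position -> angles map is single-valued), then compute positions.
n = 2000
thetas = np.column_stack([
    rng.uniform(-np.pi / 2, np.pi / 2, n),  # theta1
    rng.uniform(0.1, np.pi - 0.1, n),       # theta2
])
positions = forward_kinematics(thetas)

# Output layer: with the features fixed, the optimal linear output
# weights minimizing squared error have a closed-form solution.
Phi = features(positions)
A, *_ = np.linalg.lstsq(Phi, thetas, rcond=None)

# Network output = linear combination of the hidden features.
pred = Phi @ A
print("training RMSE (radians):", np.sqrt(np.mean((pred - thetas) ** 2)))
```

How well this crude approximation fits depends entirely on how informative the random features are for the target map, which is exactly the gap the paper's optimal-feature construction is meant to close.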