Abstract
A simple neural network composed of layers of "neurons," each of which thresholds the weighted sum of input values received from the neurons of the immediately preceding layer, is directly applicable to many signal detection tasks. Use of the maximum-likelihood "ideal observer" formalism allows a quantitative measure of neural-network performance for such tasks, in contrast to measures of network convergence such as sum-of-squares error. For the signal-known-exactly (SKE) problem, in which a known signal is to be detected on a noisy background, a single-layer neural network with ideal weights performs at 100% efficiency, since it is isomorphic to the matched-filter ideal observer. Measured efficiencies below 100% primarily reflect idiosyncrasies of the training method (e.g., back-propagation) and incomplete training of the network. For more complicated problems, in which the ideal test statistic contains terms that are quadratic or higher order in the data, more complex network architectures are required. Even then, however, convergence to optimal solutions is not assured. The feed-forward, back-propagation training paradigm leads to far-from-ideal performance on these higher-order tasks, at least when training is carried out on noise-free data or on small sets of noisy data.
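The isomorphism claimed above for the SKE task can be illustrated directly. Below is a minimal sketch, not from the paper: it assumes a known one-dimensional signal on i.i.d. unit-variance Gaussian noise, and the signal shape, array sizes, and function names are illustrative assumptions. A single linear "neuron" whose weights equal the signal template computes exactly the matched-filter test statistic, and any monotone threshold or sigmoid applied afterward leaves ROC performance, and hence detection efficiency, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SKE setup (hypothetical, not from the paper): a known
# Gaussian-bump signal on i.i.d. Gaussian noise of unit variance.
n = 64
s = np.exp(-0.5 * ((np.arange(n) - n / 2) / 4.0) ** 2)  # known signal template

def matched_filter_statistic(g, s):
    """Ideal-observer test statistic for SKE on white noise: t = s^T g."""
    return s @ g

def single_layer_neuron(g, w, b=0.0):
    """One linear 'neuron': weighted sum of inputs plus bias. With
    w = s and b = 0 its output equals the matched-filter statistic;
    a subsequent monotone threshold does not alter ROC performance."""
    return w @ g + b

# Check on one signal-present and one signal-absent realization:
# the two statistics coincide, so the network attains 100% efficiency.
for g in (s + rng.standard_normal(n), rng.standard_normal(n)):
    assert np.isclose(matched_filter_statistic(g, s),
                      single_layer_neuron(g, w=s))
```

For higher-order tasks described in the abstract, the ideal test statistic contains terms quadratic (or higher) in the data, which a single linear layer cannot represent; this is why the more complex architectures mentioned above are needed.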
