Abstract
An experimental hardware neural network built from cascadable, analogue CMOS test chips has been successfully trained by a host computer on the sunspot benchmark series using hardware-in-the-loop backpropagation learning. Comparisons are made with an ideal software net. Experiments show the advantage of training at the recall-mode weight resolution when an extra high-precision weight representation is kept for the weight updates. Hyperbolic tangent neuron outputs are used directly to calculate the neuron derivatives. To avoid false negative derivatives caused by output offsets, the outputs are scaled. With a view to a hardware learning implementation, a common output scaling factor is shown to be feasible.
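The two ideas sketched in the abstract can be illustrated in a few lines: (i) computing the tanh derivative directly from the (scaled) neuron output so that output offsets cannot drive it negative, and (ii) accumulating weight updates in a high-precision representation while quantizing to the recall-mode resolution actually loaded into the analogue chip. The function names, the scaling factor, and the resolution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tanh_derivative_from_output(y, scale=0.9):
    """Derivative of tanh computed from the neuron output itself.

    For an ideal tanh unit, dy/dx = 1 - y**2. Hardware offsets can push
    |y| slightly above 1, making the naive formula negative; scaling the
    output by a common factor < 1 keeps the derivative non-negative.
    (scale=0.9 is an illustrative choice.)"""
    ys = scale * y
    return 1.0 - ys ** 2

def quantize(w, resolution=2 ** -6):
    """Round weights to the recall-mode resolution of the analogue chip
    (resolution chosen here only for illustration)."""
    return np.round(w / resolution) * resolution

# High-precision master weights accumulate the fine backprop updates;
# only the quantized copy is written to the hardware for recall.
w_hi = np.array([0.1234, -0.5678])   # high-precision representation
grad = np.array([0.01, -0.02])       # example gradient from backprop
lr = 0.1
w_hi -= lr * grad                    # update in high precision
w_chip = quantize(w_hi)              # weights loaded into the chip
```

With the scaling in place, an offset-corrupted output such as y = 1.05 still yields a positive derivative, so the error signal keeps its correct sign during training.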
