Abstract
A coding method, distributed normalisation, is presented to speed up the training of a back-propagation neural network classifier. In contrast to one-node normalisation coding, where each feature maps to a single input node, the values of the feature variables are distributed over several input nodes, increasing the representational resolution of selected parts of each feature variable's range. A distinct advantage of this coding method is that it retains the generalisation capability of one-node normalisation coding.
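The abstract does not spell out the exact coding scheme, so the sketch below illustrates one plausible reading: an interval (thermometer-style) coding in which a feature's range is split across several input nodes, shown alongside plain one-node normalisation for contrast. The function names, the number of nodes, and the equal-width interval scheme are illustrative assumptions, not necessarily the paper's method.

```python
import numpy as np

def one_node_normalise(x, lo, hi):
    # One-node coding: the whole feature collapses to a single input in [0, 1].
    return (x - lo) / (hi - lo)

def distributed_normalise(x, lo, hi, n_nodes=4):
    # Hypothetical distributed coding (assumed scheme): split [lo, hi] into
    # n_nodes equal sub-intervals, one input node per interval. A node reads
    # 1.0 once x has passed its interval, 0.0 before it, and the covered
    # fraction inside its own interval, so each part of the feature range
    # gets a node's full [0, 1] resolution.
    edges = np.linspace(lo, hi, n_nodes + 1)
    widths = edges[1:] - edges[:-1]
    return np.clip((x - edges[:-1]) / widths, 0.0, 1.0)

# A feature value of 0.3 on [0, 1] with 4 nodes:
print(one_node_normalise(0.3, 0.0, 1.0))     # 0.3
print(distributed_normalise(0.3, 0.0, 1.0))  # [1.  0.2 0.  0. ]
```

Under this reading, a single scalar input becomes a short vector whose components each resolve a sub-range of the feature, which is one way a coding could widen the representation of particular parts of a variable without changing what the network can generalise over.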
