Tilinglike learning in the parity machine

Abstract
An algorithm for training multilayered feedforward neural networks is presented. The strategy closely resembles the well-known tiling algorithm, yet the resulting architecture is completely different. New hidden units are added to a single layer, each one trained to correct the errors of the previous ones; standard perceptron learning can be applied. The output of the network is given by the product of these k (±1) neurons (parity machine). In the special case of two hidden units, the capacity αc and the stability of the network can be derived exactly by means of a replica-symmetric calculation. Correlations between the two sets of couplings vanish exactly. For arbitrary k, estimates of αc are given. Within this estimation, the asymptotic capacity per input neuron of a network trained according to the proposed algorithm is found to be αc ∝ k ln k for k→∞. This is in agreement with recent analytic results for the algorithm-independent capacity of a parity machine.
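To make the training strategy concrete, the following is a minimal Python/NumPy sketch of a tiling-like parity-machine learner along the lines described above; the function names, the epoch limit, and the max_units stopping rule are illustrative assumptions, not taken from the paper. Each new hidden unit is trained by standard perceptron learning to output −1 exactly on the patterns the current product of units still misclassifies, so that multiplying it into the output flips, and thereby corrects, those errors.

```python
import numpy as np

def signs(z):
    """Map to +/-1, treating 0 as +1, so the product stays in {+1, -1}."""
    return np.where(z >= 0, 1.0, -1.0)

def perceptron_learn(X, t, epochs=200):
    """Standard perceptron learning on +/-1 targets (epoch limit is an assumption)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x, ti in zip(X, t):
            if (x @ w) * ti <= 0:          # misclassified; zero overlap counts as a mistake
                w = w + ti * x
                mistakes += 1
        if mistakes == 0:
            break
    return w

def train_parity_machine(X, y, max_units=10):
    """Add hidden units one at a time: each new unit is trained to output -1
    exactly on the patterns the current product of units misclassifies, so
    multiplying it in flips (corrects) those errors."""
    W = []
    prod = np.ones(len(y))                 # product of zero units is +1
    for _ in range(max_units):
        wrong = prod != y
        if not wrong.any():                # all patterns stored
            break
        t = np.where(wrong, -1.0, 1.0)     # targets for the new unit
        w = perceptron_learn(X, t)
        W.append(w)
        prod = prod * signs(X @ w)
    return np.array(W)

def parity_output(W, X):
    """Network output: the product of the k (+/-1) hidden units."""
    return np.prod(signs(X @ W.T), axis=1)

# Toy usage: random +/-1 patterns with random +/-1 labels.
rng = np.random.default_rng(0)
X = signs(rng.standard_normal((40, 20)))
y = signs(rng.standard_normal(40))
W = train_parity_machine(X, y)
print("hidden units:", len(W), " training errors:", int(np.sum(parity_output(W, X) != y)))
```

Note that, unlike the construction analyzed in the paper, this sketch does not enforce a strict decrease of the error count from one unit to the next, so the max_units cap stands in for a proper convergence guarantee.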
