Abstract
Learning in a generalised perceptron neural network model is investigated by numerical simulation. It is found that the distribution of learning times is very broad and spreads out as the system size is increased. The mean number of steps ⟨k⟩ to learn a first-order task is found to increase with the system size N according to a power law, ⟨k⟩ ∼ N^α, with α = 1.86 ± 0.05.
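
The scaling measurement described above can be illustrated with a minimal numerical sketch. The abstract does not specify the generalised perceptron model or the exact "first-order" task, so the code below assumes the standard perceptron learning rule on a randomly generated, linearly separable task defined by a hypothetical random teacher vector; it simply counts weight updates until all patterns are classified correctly, for several system sizes N.

```python
# Hedged sketch: standard perceptron rule on a random linearly separable task.
# The teacher vector, pattern statistics and stopping rule are assumptions,
# not the model of the paper.
import numpy as np

def steps_to_learn(n_inputs, n_patterns, rng, max_sweeps=10_000):
    """Count weight-update steps until the perceptron classifies all patterns."""
    teacher = rng.standard_normal(n_inputs)             # defines a separable task
    patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_inputs))
    targets = np.sign(patterns @ teacher)
    w = np.zeros(n_inputs)
    steps = 0
    for _ in range(max_sweeps):
        errors = 0
        for x, t in zip(patterns, targets):
            if np.sign(w @ x) != t:                      # misclassified: update
                w += t * x
                steps += 1
                errors += 1
        if errors == 0:                                  # task learned
            return steps
    return steps                                         # did not converge in time

rng = np.random.default_rng(0)
for N in (10, 20, 40, 80):
    trials = [steps_to_learn(N, 2 * N, rng) for _ in range(50)]
    print(N, np.mean(trials))                            # mean learning time vs N
```

Fitting a straight line to log(mean steps) versus log(N) from such runs gives an estimate of the exponent α; the broad spread of individual trial times at fixed N corresponds to the wide learning-time distribution reported in the abstract.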
