Abstract
We apply a "cavity-type" method to analyze the learning ability of single- and multilayer perceptrons. We show that the mean-field equations obtained in this way, which are identical to the equations derived previously by the replica method, describe not only the properties of the optimal network but also a learning process that leads to this network. We discuss how our ideas can be applied to the construction of learning algorithms. Our interpretation of the mean-field theory also leads naturally to a new concept, "flexibility," which is a measure of the network's ability to learn.