Abstract
The generalization ability of a neural network that infers a rule from examples can be characterized by the convergence of the training error to the generalization error as the size of the training set increases. Using the replica technique, we calculate the maximum difference between training and generalization error over the ensemble of all perceptrons trained by a teacher perceptron, as well as the maximal generalization error among the perceptrons whose training error is zero. The results are compared with the rigorous bounds provided by the Vapnik-Chervonenkis theorem.
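The two quantities studied here can be estimated numerically in a simple teacher-student setup. The sketch below (a Monte Carlo illustration, not the replica calculation itself; all parameter values are assumptions for demonstration) samples random student perceptrons at varying overlaps with a teacher, then reports the largest observed gap between training and generalization error and the worst generalization error among students consistent with the training set. For inputs drawn isotropically, the generalization error of a student is the angle between student and teacher weight vectors divided by pi.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed demonstration parameters: input dimension, training set size,
# number of sampled student perceptrons.
d, p, n_students = 20, 100, 5000

# Teacher perceptron: a random unit weight vector.
teacher = rng.normal(size=d)
teacher /= np.linalg.norm(teacher)

# Training set labeled by the teacher.
X = rng.normal(size=(p, d))
y = np.sign(X @ teacher)

# Students at varying overlaps with the teacher (a stand-in for the full
# ensemble); include the teacher itself so at least one student is consistent.
mix = rng.uniform(size=(n_students, 1))
W = mix * teacher + (1.0 - mix) * rng.normal(size=(n_students, d))
W[0] = teacher
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Training error: fraction of misclassified training examples per student.
train_err = np.mean(np.sign(X @ W.T) != y[:, None], axis=0)

# Generalization error for isotropic inputs: student-teacher angle / pi.
gen_err = np.arccos(np.clip(W @ teacher, -1.0, 1.0)) / np.pi

# Maximum observed gap between training and generalization error.
max_gap = np.max(np.abs(train_err - gen_err))

# Worst generalization error among zero-training-error students.
worst_consistent = gen_err[train_err == 0.0].max()

print(f"max |eps_train - eps_gen| over sampled students: {max_gap:.3f}")
print(f"worst eps_gen among zero-training-error students: {worst_consistent:.3f}")
```

Both printed quantities are sample estimates; the paper's replica calculation gives their typical values in the thermodynamic limit, whereas the Vapnik-Chervonenkis theorem bounds them uniformly over the ensemble.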