Systems that can learn from examples: Replica calculation of uniform convergence bounds for perceptrons
- 13 September 1993
- Research article
- Published by American Physical Society (APS) in Physical Review Letters
- Vol. 71 (11), 1772-1775
- https://doi.org/10.1103/physrevlett.71.1772
Abstract
The generalization ability of a neural network that infers a rule from examples can be characterized by the convergence of the training error to the generalization error as the size of the training set increases. Using the replica technique, we calculate the maximum difference between training and generalization error over the ensemble of all perceptrons trained by a teacher perceptron, as well as the maximal generalization error among those perceptrons whose training error is zero. The results are compared with the rigorous bounds provided by the Vapnik-Chervonenkis theorem.
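For orientation, the two quantities calculated in the paper, together with the rigorous benchmark they are compared against, can be written out schematically. The notation below (student couplings J, training error ε_t, generalization error ε_g, p = αN examples for N inputs) is the standard one in the statistical-mechanics literature on perceptron learning and is assumed here rather than taken from the abstract; the Vapnik-Chervonenkis bound is quoted in one common textbook form, whose constants vary between statements.

```latex
% Sketch of the quantities discussed in the abstract (assumed notation:
% student perceptron with couplings J, N inputs, p = alpha*N training
% examples labeled by a teacher perceptron, training error eps_t(J),
% generalization error eps_g(J)).

% Maximum gap between generalization and training error over the
% ensemble of all trained perceptrons:
\[
\Delta(\alpha) = \max_{J}\,\bigl[\epsilon_g(J) - \epsilon_t(J)\bigr]
\]

% Worst-case generalization error in the version space, i.e. among
% students that classify every training example correctly:
\[
\epsilon_{\max}(\alpha) = \max_{J:\,\epsilon_t(J)=0} \epsilon_g(J)
\]

% For comparison, one common form of the Vapnik--Chervonenkis bound
% (constants differ between statements): with probability at least
% 1 - \delta over the draw of the p examples,
\[
\sup_{J}\,\bigl|\epsilon_t(J) - \epsilon_g(J)\bigr|
  \le \sqrt{\frac{8}{p}\left(d_{\mathrm{VC}}\,
       \ln\!\frac{2ep}{d_{\mathrm{VC}}} + \ln\!\frac{4}{\delta}\right)}
\]

% where d_VC = N for the perceptron without a threshold
% (d_VC = N + 1 if a threshold is included).
```

Because the VC bound holds uniformly over J, it bounds both quantities at once: Δ(α) directly, and ε_max(α) because ε_t = 0 throughout the version space, which is what makes the comparison in the abstract meaningful.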