Generalization performance of Bayes optimal classification algorithm for learning a perceptron
- 20 May 1991
- Research article
- Published by American Physical Society (APS) in Physical Review Letters
- Vol. 66 (20), 2677-2680
- https://doi.org/10.1103/physrevlett.66.2677
Abstract
The generalization error of the Bayes optimal classification algorithm when learning a perceptron from noise-free random training examples is calculated exactly using methods of statistical mechanics. It is shown that, under an assumption of replica symmetry, in the thermodynamic limit the error of the Bayes optimal algorithm is less than the error of a canonical stochastic learning algorithm by a factor approaching √2 as the ratio of the number of training examples to perceptron weights grows. In addition, it is shown that approximations to the generalization error of the Bayes optimal algorithm can be achieved by learning algorithms that use a two-layer neural net to learn a perceptron.
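Since the abstract contrasts the Bayes optimal classifier with a single stochastic (Gibbs) student, a minimal numerical sketch may help illustrate the idea: the Bayes optimal prediction for a noise-free perceptron rule is the majority vote over students drawn from the version space (the weight vectors consistent with the training examples). The sketch below is an illustration only, not the paper's method: the dimension `N`, the load `alpha`, and the `gibbs_student` routine (random-restart perceptron learning, a crude stand-in for uniform sampling of the version space) are all assumed toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50        # number of perceptron weights (input dimension); toy value
alpha = 4.0   # load: ratio of training examples to weights; toy value
P = int(alpha * N)

# Teacher perceptron: the noise-free rule to be learned.
teacher = rng.standard_normal(N)

# Random training examples labeled by the teacher.
X = rng.standard_normal((P, N))
y = np.sign(X @ teacher)

def gibbs_student(X, y, n_iter=2000):
    """Draw one student roughly from the version space: run the
    perceptron rule from a random start until all training examples
    are classified correctly (an assumed stand-in for Gibbs sampling)."""
    w = rng.standard_normal(X.shape[1])
    for _ in range(n_iter):
        errs = np.flatnonzero(np.sign(X @ w) != y)
        if errs.size == 0:
            break
        i = rng.choice(errs)
        w += y[i] * X[i]  # classic perceptron correction step
    return w

# Bayes-style output: majority vote of many version-space students
# (odd committee size avoids ties on continuous inputs).
students = [gibbs_student(X, y) for _ in range(25)]

X_test = rng.standard_normal((5000, N))
y_test = np.sign(X_test @ teacher)

votes = np.sign(np.stack([np.sign(X_test @ w) for w in students]).sum(axis=0))
gibbs_err = np.mean(np.sign(X_test @ students[0]) != y_test)
bayes_err = np.mean(votes != y_test)
print(f"Gibbs (single student) error: {gibbs_err:.3f}")
print(f"Bayes (majority vote) error:  {bayes_err:.3f}")
```

With enough committee members and training examples, the vote's test error should fall below the single student's, qualitatively mirroring the gap computed in the paper; the exact √2 factor emerges only in the thermodynamic limit of large N at fixed load.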