Learning a rule in a multilayer neural network
- 7 November 1993
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 26 (21), 5781-5794
- https://doi.org/10.1088/0305-4470/26/21/017
Abstract
The problem of learning from examples in multilayer networks is studied within the framework of statistical mechanics. Using the replica formalism, we calculate the average generalization error of a fully connected committee machine in the limit of a large number of hidden units. If the number of training examples is proportional to the number of inputs of the network, the generalization error approaches a finite value as the training set grows. If instead the number of training examples is proportional to the number of weights in the network, we find first-order phase transitions with a discontinuous drop in the generalization error, for both binary and continuous weights.
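To make the setting concrete, the following is a minimal sketch (not the paper's calculation) of the architecture studied here: a fully connected committee machine, whose output is the majority vote of K sign-valued hidden units, each connected to all N inputs. The teacher/student setup and the Monte Carlo estimate of the generalization error below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def committee_machine(x, W):
    """Output of a fully connected committee machine:
    W has shape (K, N); each of the K hidden units computes
    sign(w_k . x), and the output is the sign of their sum
    (a majority vote for odd K)."""
    hidden = np.sign(W @ x)          # K hidden-unit activations in {-1, +1}
    return np.sign(hidden.sum())     # majority vote over hidden units

# Hypothetical teacher/student experiment: estimate the generalization
# error as the probability that student and teacher disagree on a
# random input (continuous weights, odd K to avoid ties).
rng = np.random.default_rng(0)
N, K = 20, 3
teacher = rng.standard_normal((K, N))
student = rng.standard_normal((K, N))

n_test = 1000
disagreements = sum(
    committee_machine(x, teacher) != committee_machine(x, student)
    for x in rng.standard_normal((n_test, N))
)
eps = disagreements / n_test  # empirical generalization error
print(eps)
```

For an untrained (random) student this estimate sits near 0.5; the paper's result concerns how this error decreases, and for weight-proportional training sets drops discontinuously, as the number of examples grows.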