Optimal generalization in perceptrons
- 7 December 1992
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 25 (23) , 6243-6250
- https://doi.org/10.1088/0305-4470/25/23/020
Abstract
A new learning algorithm for the one-layer perceptron is presented. It aims to maximize the generalization gain per example. Analytical results are obtained for the case of a single presentation of each example. The weight attached to a Hebbian term is a function of the expected stability of the example in the teacher perceptron, which yields upper bounds on the generalization ability. This scheme can be iterated, and numerical simulations show that it converges, within errors, to the theoretical optimal generalization ability of the Bayes algorithm. Analytical and numerical results are also obtained for an algorithm that maximizes generalization under a learning strategy with selection of examples, and it is proved that, as expected, orthogonal selection is optimal. Exponential decay of the generalization error is obtained for a single presentation of selected examples.
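The setting described in the abstract can be illustrated with a minimal sketch of online teacher-student perceptron learning. The teacher vector `B`, the Gaussian examples, and the plain Hebbian modulation `f = 1.0` below are illustrative assumptions, not the paper's optimized weight function (which depends on the expected stability of the example in the teacher); the generalization error is measured through the teacher-student overlap via the standard relation eps = arccos(R)/pi for spherical perceptrons.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                          # input dimension (illustrative choice)

B = rng.standard_normal(N)       # teacher perceptron weights
B /= np.linalg.norm(B)

J = np.zeros(N)                  # student perceptron weights

def gen_error(J, B):
    """Generalization error from the teacher-student overlap R:
    eps = arccos(R) / pi for spherical perceptrons."""
    R = J @ B / (np.linalg.norm(J) * np.linalg.norm(B) + 1e-12)
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

# Single presentation of each example (no repetitions).
for _ in range(20 * N):
    xi = rng.standard_normal(N)          # new random example
    sigma = np.sign(B @ xi)              # teacher's label
    # Plain Hebbian weight; the algorithm in the paper instead uses a
    # modulation that depends on the example's expected stability.
    f = 1.0
    J += (f / N) * sigma * xi

print(f"generalization error after {20 * N} examples: {gen_error(J, B):.3f}")
```

Replacing the constant `f` with a stability-dependent modulation, or selecting examples orthogonal to the current student (as the paper's selection strategy does), is what changes the slow power-law decay of the error seen here into the faster behavior reported in the abstract.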