Optimal generalization in perceptrons

Abstract
A new learning algorithm for the one-layer perceptron is presented. It aims to maximize the generalization gain per example. Analytical results are obtained for the case of a single presentation of each example. The weight attached to a Hebbian term is a function of the expected stability of the example in the teacher perceptron, which yields upper bounds for the generalization ability. This scheme can be iterated, and numerical simulations show that it converges, within errors, to the theoretical optimal generalization ability of the Bayes algorithm. Analytical and numerical results are also obtained for an algorithm that maximizes generalization within a learning strategy based on the selection of examples, and it is proved that, as expected, orthogonal selection is optimal. Exponential decay of the generalization error is obtained for a single presentation of selected examples.
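To make the setting concrete, the following is a minimal sketch, not the paper's algorithm, of on-line teacher-student perceptron learning in which each example is presented once and the Hebbian update is weighted by a function of the example's stability. The modulation function, the dimension N, and the training horizon are illustrative assumptions; the paper derives the optimal modulation analytically from the expected stability in the teacher perceptron, whereas the decaying exponential below merely mimics its qualitative behaviour of emphasizing low-stability (poorly learned) examples.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                                      # input dimension (illustrative)
alpha_max = 10.0                             # examples per weight, p = alpha * N

teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)           # teacher perceptron B, |B| = 1
student = rng.standard_normal(N)             # student weights J, random start

def modulation(aligned_field):
    # Illustrative stand-in for the stability-dependent weight of the
    # Hebbian term: larger for low-stability examples, small for examples
    # the student already classifies with a wide margin.  NOT the optimal
    # modulation derived in the paper.
    return np.exp(-aligned_field)

for step in range(int(alpha_max * N)):
    xi = rng.standard_normal(N)              # fresh example, single presentation
    label = np.sign(teacher @ xi)            # teacher's classification
    # aligned field (stability of the example for the current student)
    h = label * (student @ xi) / np.linalg.norm(student)
    # modulated Hebbian update
    student += modulation(h) * label * xi / np.sqrt(N)

# generalization error for random Gaussian inputs: eps = arccos(rho) / pi
rho = (student @ teacher) / np.linalg.norm(student)
eps = np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi
print(f"generalization error after alpha = {alpha_max}: {eps:.3f}")
```

The same loop can be adapted to the selection-of-examples strategy discussed in the abstract by drawing candidate inputs and keeping only those nearly orthogonal to the current student vector, which is the regime in which the exponential decay of the generalization error is reported.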
