Abstract
Most properties of learning and generalization in linear perceptrons can be derived from the average response function G. We present a method for calculating G using only simple matrix identities and partial differential equations. Using this method, we first rederive the known result for G in the thermodynamic limit of perceptrons of infinite size N, which has previously been calculated using replica and diagrammatic methods. We also show explicitly that the response function is self-averaging in the thermodynamic limit. Extensions of our method to more general learning scenarios with anisotropic teacher-space priors, input distributions, and weight-decay terms are discussed. Finally, finite-size effects are considered by calculating the O(1/N) correction to G. We verify the result by computer simulations and discuss the consequences for generalization and learning dynamics in linear perceptrons of finite size.
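As a numerical illustration of the self-averaging claim, the sketch below estimates a response function of the form G = (1/N) Tr (A + λI)⁻¹, where A is the empirical input correlation matrix of p = αN random examples, and checks that its sample-to-sample fluctuations shrink as N grows. The specific definition of G, the values of α and λ, and the Gaussian input distribution are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def response_function(N, alpha, lam, rng):
    """Illustrative response function G = (1/N) Tr (A + lam*I)^{-1},
    with A = X X^T / N the empirical correlation matrix of
    p = alpha*N random Gaussian inputs (an assumed setup)."""
    p = int(alpha * N)
    X = rng.standard_normal((N, p))
    A = X @ X.T / N
    # A is symmetric PSD, so the resolvent trace is a sum over eigenvalues.
    eig = np.linalg.eigvalsh(A)
    return np.mean(1.0 / (eig + lam))

rng = np.random.default_rng(0)
alpha, lam, trials = 2.0, 0.1, 50
stats = {}
for N in (20, 160):
    vals = [response_function(N, alpha, lam, rng) for _ in range(trials)]
    stats[N] = (np.mean(vals), np.std(vals))

# Self-averaging: the standard deviation of G across realizations
# should decrease markedly (roughly like 1/N) as N grows.
print(stats)
```

Running this, the mean of G is nearly identical at both sizes while its standard deviation drops sharply from N = 20 to N = 160, consistent with G being self-averaging in the thermodynamic limit.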
