Learning with noise in a linear perceptron
- 7 March 1992
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 25 (5), 1119-1133
- https://doi.org/10.1088/0305-4470/25/5/019
Abstract
The learning of a set of p random patterns in a linear perceptron is studied in the limit of a large number (N) of input units, with noise on the weights, inputs and output. The problem is formulated in continuous time as a Langevin equation, and the first task is to evaluate the response or Green function for the system. White noise on the output is shown to correspond to spatially correlated weight noise acting only in a subspace of the weight space. It is shown that the input noise acts as a simple weight decay of a size proportional to the load parameter alpha = p/N. With no weight decay, the relaxation time diverges at alpha = 1. With a weight decay it becomes shorter, and remains finite at alpha = 1, but at the cost of a larger asymptotic learning error, which is found analytically. It is shown that a small weight decay decreases the effect of noise on the weights or outputs.
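The setting described in the abstract can be illustrated with a small numerical sketch: a discretized (Euler-Maruyama) version of Langevin gradient-flow learning of a linear perceptron's quadratic training error, with an explicit weight-decay term and additive white noise on the weights. All concrete choices below (parameter names, N = 200, the noise and decay amplitudes) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hedged sketch of Langevin learning in a linear perceptron:
#   dw/dt = -grad E(w) - lam * w + noise,
# where E(w) = (1/2) * sum_mu (xi_mu . w - zeta_mu)^2 is the training error
# on p random patterns. Parameter values are illustrative assumptions.

rng = np.random.default_rng(0)

N = 200                    # number of input units
alpha = 0.8                # load parameter alpha = p / N
p = int(alpha * N)         # number of random patterns
lam = 0.1                  # weight-decay strength
sigma = 0.05               # weight-noise amplitude
eta = 0.01                 # time step (learning rate)
steps = 5000

xi = rng.standard_normal((p, N)) / np.sqrt(N)   # random input patterns
zeta = rng.standard_normal(p)                   # random target outputs
w = np.zeros(N)

for t in range(steps):
    err = xi @ w - zeta            # residuals on all p patterns
    grad = xi.T @ err              # gradient of the quadratic error
    # Euler-Maruyama step: gradient flow + weight decay + white weight noise
    w += -eta * (grad + lam * w) + sigma * np.sqrt(eta) * rng.standard_normal(N)

print("asymptotic training error:", 0.5 * np.mean((xi @ w - zeta) ** 2))
```

Consistent with the abstract's claims, setting lam = 0 and pushing alpha toward 1 makes convergence in this sketch very slow (the relaxation time diverges), while a nonzero lam restores fast relaxation at the price of a larger asymptotic training error.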