"Cavity-approach" analysis of the neural-network learning problem
- 1 June 1993
- Research article
- Published by the American Physical Society (APS) in Physical Review E
- Vol. 47 (6), 4496-4513
- https://doi.org/10.1103/physreve.47.4496
Abstract
We apply a "cavity-type" method for the analysis of the learning ability of single- and multilayer perceptrons. We show that the mean-field equations obtained in this way, which are identical to the equations derived previously by the replica method, describe not only the properties of the optimal network, but also a learning process which leads to this network. We discuss the applicability of our ideas to the construction of learning algorithms. Our interpretation of the mean-field theory also leads naturally to a new concept, "flexibility," which is a measure of the ability of the network to learn.
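The single-layer perceptron learning problem analyzed in the abstract can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the paper's cavity-method analysis: it trains a perceptron on random binary patterns with the classic perceptron rule, at a pattern load alpha = P/N well below the storage capacity, where a separating weight vector exists and the rule converges.

```python
# Illustrative sketch only (assumed setup, not the paper's cavity analysis):
# store P random +/-1 patterns with random +/-1 targets in an N-input
# perceptron using the classic perceptron learning rule.
import random

def train_perceptron(patterns, targets, max_epochs=200):
    """Perceptron rule: add y*x to w whenever a pattern x is misclassified."""
    n = len(patterns[0])
    w = [0.0] * n
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(patterns, targets):
            s = sum(wi * xi for wi, xi in zip(w, x))
            if s * y <= 0:                  # misclassified (or on the boundary)
                for i in range(n):
                    w[i] += y * x[i]
                errors += 1
        if errors == 0:                     # every pattern correctly stored
            return w, True
    return w, False

random.seed(0)
N, P = 50, 25                               # load alpha = P/N = 0.5, below capacity
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
targets = [random.choice([-1, 1]) for _ in range(P)]
w, converged = train_perceptron(patterns, targets)
print(converged)
```

For random targets the capacity of such a perceptron is alpha = 2 (Cover, 1965, cited below), so at alpha = 0.5 a solution exists with overwhelming probability and the rule finds it in a few epochs.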