Convexity, internal representations and the statistical mechanics of neural networks
- 1 January 1997
- journal article
- Published by IOP Publishing in Europhysics Letters
- Vol. 37 (1), 31-36
- https://doi.org/10.1209/epl/i1997-00113-x
Abstract
We present an approach to the statistical mechanics of feedforward neural networks which is based on counting realizable internal representations by utilizing convexity properties of the weight space. For a toy model, our method yields storage capacities based on an annealed approximation, in close agreement with one-step replica symmetry breaking results obtained from a standard approach. For a single-layer perceptron, a combinatorial result for the number of realizable output combinations is recovered and generalized to fixed stabilities.
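The combinatorial result recovered for the single-layer perceptron is presumably Cover's function-counting theorem; as a sketch under that assumption, the number of dichotomies of $p$ patterns in general position in $\mathbb{R}^N$ realizable by a zero-threshold perceptron is

$$C(p, N) = 2 \sum_{k=0}^{N-1} \binom{p-1}{k},$$

which equals $2^p$ for $p \le N$ (every output assignment is realizable) and implies the well-known storage capacity $\alpha_c = p/N = 2$ in the large-$N$ limit. The generalization to fixed stabilities mentioned in the abstract would restrict this count to output assignments realizable with a prescribed margin $\kappa > 0$.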