Weight Space Structure and Internal Representations: a Direct Approach to Learning and Generalization in Multilayer Neural Network
Preprint
- 18 January 1995
Abstract
We analytically derive the geometrical structure of the weight space in multilayer neural networks (MLN), in terms of the volumes of couplings associated with the internal representations of the training set. Focusing on the parity and committee machines, we deduce their learning and generalization capabilities, both reinterpreting some known properties and finding new exact results. The relationship between our approach and information theory, as well as the Mitchison--Durbin calculation, is established. Our results are exact in the limit of a large number of hidden units, showing that MLN are a class of exactly solvable models with a simple interpretation of replica symmetry breaking.
All Related Versions
- Version 1, 1995-01-18, ArXiv
- Published version: Physical Review Letters, 75 (12), 2432.