Learning Population Codes by Minimizing Description Length
- 1 May 1995
- journal article
- Published by MIT Press in Neural Computation
- Vol. 7 (3), 549-564
- https://doi.org/10.1162/neco.1995.7.3.549
Abstract
The minimum description length (MDL) principle can be used to train the hidden units of a neural network to extract a representation that is cheap to describe but nonetheless allows the input to be reconstructed accurately. We show how MDL can be used to develop highly redundant population codes. Each hidden unit has a location in a low-dimensional implicit space. If the hidden unit activities form a bump of a standard shape in this space, they can be cheaply encoded by the center of this bump. So the weights from the input units to the hidden units in an autoencoder are trained to make the activities form a standard bump. The coordinates of the hidden units in the implicit space are also learned, which adds flexibility: the network can develop a discontinuous topography when presented with inputs from different classes.
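The core encoding idea in the abstract can be sketched numerically: if hidden-unit activities form a standard-shaped bump over the units' implicit-space coordinates, the whole activity pattern is summarized by the bump's center. The sketch below is illustrative only; the coordinates, activities, bump width `sigma`, and the `bump_fit` helper are all hypothetical stand-ins, not the paper's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 25
# Hypothetical coordinates of the hidden units in a 1-D implicit space
# (the paper's implicit space is low-dimensional; 1-D keeps the sketch simple).
coords = np.linspace(0.0, 1.0, n_hidden)

# Example hidden-unit activities: a bump near 0.4 plus a little noise.
# In the model these would come from the encoder weights applied to an input.
activities = (np.exp(-((coords - 0.4) ** 2) / (2 * 0.1 ** 2))
              + 0.05 * rng.random(n_hidden))

def bump_fit(acts, coords, sigma=0.1):
    """Fit a fixed-width Gaussian bump to the activity pattern.

    The bump center is taken as the activity-weighted mean of the
    units' implicit-space coordinates; each unit's predicted activity
    is then a Gaussian function of its distance to that center.
    """
    center = np.sum(acts * coords) / np.sum(acts)
    predicted = np.exp(-((coords - center) ** 2) / (2 * sigma ** 2))
    return center, predicted

center, predicted = bump_fit(activities, coords)

# The hidden representation is cheap to describe when the activities
# already look like the standard bump: the residual mismatch is the
# part of the code that still has to be paid for.
bump_cost = np.mean((activities - predicted) ** 2)
```

The single scalar `center` then stands in for the full 25-unit activity vector, which is the sense in which the population code becomes "cheap to describe".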