Abstract
The informational aspects of neural networks performing associative memory after Hebbian learning are analysed in detail. The recall process is decomposed into its recognition and error-correction components, and the respective contributions are clarified and computed. The analysis identifies the principal sources of information loss: the choice of decoding procedure, the indeterminacy introduced by zero-modification states of the synapses, and the statistical dependences between the modification states of different synapses. A wide range of storage schemes and decoding procedures is discussed, and their optimal characteristics are compared and evaluated against the corresponding limits prescribed by Shannon's theorem.
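As a concrete reference point for the setting the abstract describes, the following is a minimal sketch of standard Hopfield-style Hebbian storage and error-correcting recall, one simple instance of the storage schemes and decoding procedures analysed here; the network size N, pattern count P, the synchronous sign-update dynamics, and the 10% probe noise are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100  # number of neurons (illustrative size, not from the paper)
P = 10   # number of stored patterns (illustrative)

# Random +/-1 patterns to be stored as memories.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) storage: J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)  # no self-coupling

def recall(state, steps=20):
    """Synchronous recall dynamics: iterate s_i <- sign(sum_j J_ij s_j).

    Capped at `steps` iterations since synchronous updates can cycle.
    """
    s = state.copy()
    for _ in range(steps):
        s_new = np.sign(J @ s).astype(int)
        s_new[s_new == 0] = 1  # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Probe with a corrupted version of the first pattern: flip 10% of the bits.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1

retrieved = recall(probe)
overlap = retrieved @ patterns[0] / N  # 1.0 means perfect error correction
print(f"overlap with stored pattern: {overlap:.2f}")
```

The two stages the paper separates are visible even in this toy run: recognition corresponds to the probe aligning with the correct stored pattern rather than a spurious one, and error correction to the dynamics flipping the corrupted bits back, raising the overlap toward 1.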