Robustness in neural computation: random graphs and sparsity
- 1 May 1992
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Information Theory
- Vol. 38 (3), 1114-1119
- https://doi.org/10.1109/18.135650
Abstract
An attempt is made to mathematically codify the belief that fully interconnected neural networks continue to function efficiently in the presence of component damage. Component damage is introduced in a fully interconnected neural network model of n neurons by randomly deleting the links between neurons. An analysis of the outer-product algorithm for this random graph model of sparse interconnectivity yields the following result: if the probability of losing any given link between two neurons is 1 − p, then the outer-product algorithm can store on the order of pn/log(pn²) stable memories while correcting a linear number of random errors. In particular, the average degree of the interconnectivity graph dictates the memory storage capability, and functional storage of memories as stable states becomes feasible abruptly when the average number of interconnections retained by a neuron exceeds the order of log n links (of a total of n possible links) with other neurons.
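The setting the abstract analyzes is the standard outer-product (Hebb) rule restricted to a random interconnectivity graph in which each link survives independently with probability p. The following is a minimal sketch of that setting, not code from the paper; the parameter values n, p, and the random seed are illustrative assumptions, and the memory count is set at the pn/log(pn²) scale quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500  # number of neurons (illustrative choice)
p = 0.5  # probability that any given link is retained (lost with prob. 1 - p)
m = int(p * n / np.log(p * n**2))  # memory count at the quoted capacity scale

# Random +/-1 patterns to store as memories.
X = rng.choice([-1, 1], size=(m, n))

# Symmetric random graph: keep each off-diagonal link with probability p.
mask = rng.random((n, n)) < p
mask = np.triu(mask, k=1)
mask = mask | mask.T  # symmetric adjacency, zero diagonal

# Outer-product (Hebb) weights, restricted to the surviving links.
W = (X.T @ X).astype(float)
np.fill_diagonal(W, 0.0)
W *= mask

def is_stable(x):
    """A memory x is a stable state if the sign update rule reproduces it."""
    s = np.where(W @ x >= 0, 1, -1)  # sign threshold, ties resolved to +1
    return np.all(s == x)

stable = sum(is_stable(x) for x in X)
print(f"{stable}/{m} stored patterns are stable states")
```

Rerunning this sketch while shrinking p illustrates the threshold behavior described above: once the average retained degree pn falls toward the order of log n, stored patterns abruptly cease to be stable states.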
This publication has 6 references indexed in Scilit:
- Memory capacity in neural network models: Rigorous lower bounds. Published by Elsevier, 2003
- Convergence results in an associative memory model. Neural Networks, 1988
- Effect of connectivity in associative memory models. Published by Institute of Electrical and Electronics Engineers (IEEE), 1988
- The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, 1987
- Storing Infinite Numbers of Patterns in a Spin-Glass Model of Neural Networks. Physical Review Letters, 1985
- Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 1982