Abstract
An attempt is made to mathematically codify the belief that fully interconnected neural networks continue to function efficiently in the presence of component damage. Component damage is introduced in a fully interconnected neural network model of n neurons by randomly deleting the links between neurons. An analysis of the outer-product algorithm for this random graph model of sparse interconnectivity yields the following result: if the probability of losing any given link between two neurons is 1 - p, then the outer-product algorithm can store on the order of pn/log(pn²) stable memories, correcting a linear number of random errors. In particular, the average degree of the interconnectivity graph dictates the memory storage capability, and functional storage of memories as stable states becomes feasible abruptly when the average number of neural interconnections retained by a neuron exceeds the order of log n links (of a total of n possible links) with other neurons.
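The model described above can be illustrated with a minimal simulation sketch, assuming NumPy. The parameter names (n, m, p) and the specific values are illustrative choices, not from the paper: m patterns are stored with the outer-product (Hebb) rule, each symmetric link is then deleted independently with probability 1 - p, and each stored pattern is checked for stability under the sign-threshold update.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200   # number of neurons (illustrative)
m = 5     # number of stored patterns, well below the pn/log(pn^2) scale
p = 0.5   # probability that any given link is retained

# Store m random +/-1 patterns with the outer-product (Hebb) rule.
patterns = rng.choice([-1, 1], size=(m, n))
W = patterns.T @ patterns
np.fill_diagonal(W, 0)  # no self-connections

# Random dilution: delete each symmetric link with probability 1 - p.
keep = rng.random((n, n)) < p
keep = np.triu(keep, 1)
keep = keep + keep.T     # symmetrize the surviving-link mask
W = W * keep

# A pattern x is a stable state if sign(Wx) agrees with x componentwise.
def is_stable(W, x):
    return bool(np.all(np.sign(W @ x) == x))

print([is_stable(W, x) for x in patterns])
```

With the average retained degree pn = 100 far above log n, the stored patterns are overwhelmingly likely to remain stable, consistent with the abrupt-feasibility threshold stated in the abstract; shrinking p toward (log n)/n makes stability fail.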
