Reducing neuron gain to eliminate fixed-point attractors in an analog associative memory

Abstract
We show analytically that the expected number of fixed-point attractors in an analog associative memory neural network increases exponentially with network size, with a scaling exponent that depends on the ratio of stored memories to neurons and on the maximum slope, or gain, of the neuron transfer function. For a sigmoidal transfer function the scaling exponent decreases with decreasing gain, indicating that gain reduction can improve computational performance by eliminating spurious fixed points. Numerical data based on fixed-point counts in small networks support the analytical results.
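
As a rough illustration of the kind of fixed-point count mentioned in the last sentence, the following Python sketch (not the paper's code; the function names, update rule, and parameter values are illustrative assumptions) iterates a synchronous analog update x -> tanh(gain * W x) with Hebbian weights W from many random initial states in a small network and counts the distinct converged states at several gain values. Under the behavior described by the analysis, fewer distinct fixed points should be found as the gain is reduced.

    # Minimal sketch: count numerically distinct fixed points of a small
    # analog associative memory at different neuron gains (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def hebb_weights(patterns):
        """Hebbian weights W = (1/N) * sum_mu xi^mu (xi^mu)^T with zero diagonal."""
        n = patterns.shape[1]
        w = patterns.T @ patterns / n
        np.fill_diagonal(w, 0.0)
        return w

    def count_fixed_points(w, gain, n_trials=2000, n_iter=500, tol=1e-6):
        """Iterate x -> tanh(gain * W x) from random starts; count distinct limits."""
        n = w.shape[0]
        found = []
        for _ in range(n_trials):
            x = rng.uniform(-1.0, 1.0, size=n)
            for _ in range(n_iter):
                x_new = np.tanh(gain * (w @ x))
                converged = np.max(np.abs(x_new - x)) < tol
                x = x_new
                if converged:
                    break
            # keep only genuinely converged, previously unseen fixed points
            if np.max(np.abs(np.tanh(gain * (w @ x)) - x)) < tol:
                if not any(np.allclose(x, f, atol=1e-3) for f in found):
                    found.append(x.copy())
        return len(found)

    n_neurons, n_memories = 10, 3            # small network, as in the numerical checks
    patterns = rng.choice([-1.0, 1.0], size=(n_memories, n_neurons))
    w = hebb_weights(patterns)
    for gain in (1.5, 3.0, 10.0):
        print(f"gain = {gain:5.1f}: {count_fixed_points(w, gain)} fixed points found")

The synchronous tanh update and the random-restart enumeration are simplifications chosen for brevity; states that fail to converge within the iteration budget are simply not counted.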