Sign-constrained linear learning and diluting in neural networks

Abstract
For neural networks in which each synapse has a predefined sign, excitatory or inhibitory, the simplex algorithm is applied as a learning rule. The given signs are assumed fixed: they can never change during learning. The maximum possible dilution of synapses achievable through learning is determined at the maximum storage capacity, both for a model with all-positive signs and for one with randomly distributed signs. In the limit of infinitely many neurons, a replica-symmetric calculation of the free energy and of the distribution of coupling strengths is presented. The linear algorithm is also applied to networks with a more suitable choice of sign constraints, which yields a higher storage capacity.
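The learning problem described above can be sketched as a linear program: find couplings J_i that stabilize every stored pattern while each J_i keeps its predefined sign. The sketch below is illustrative, not the paper's implementation; all variable names, the margin value of 1, and the choice of objective (total synaptic magnitude) are assumptions. It uses SciPy's `linprog` with a simplex-type solver, since a basic (vertex) solution automatically sets most couplings to exactly zero, which mirrors the dilution discussed in the abstract.

```python
# Hypothetical sketch of sign-constrained learning as a linear program.
# Assumed setup: P random binary patterns xi with desired outputs sigma,
# and fixed synaptic signs g_i; none of these names come from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, P = 30, 5                              # couplings and stored patterns
g = rng.choice([-1, 1], size=N)           # predefined synaptic signs (fixed)
xi = rng.choice([-1, 1], size=(P, N))     # input patterns
sigma = rng.choice([-1, 1], size=P)       # desired outputs

# Stability constraints: sigma^mu * sum_i J_i xi_i^mu >= 1 for each pattern,
# rewritten as A_ub @ J <= b_ub, the form linprog expects.
A_ub = -(sigma[:, None] * xi).astype(float)
b_ub = -np.ones(P)

# Sign constraints enter as variable bounds: g_i * J_i >= 0.
bounds = [(0, None) if gi > 0 else (None, 0) for gi in g]

# Minimise the total synaptic magnitude sum_i |J_i| = sum_i g_i J_i.
# A basic solution of this LP has at most P nonzero couplings, so the
# remaining N - P synapses are diluted to exactly zero.
res = linprog(c=g.astype(float), A_ub=A_ub, b_ub=b_ub,
              bounds=bounds, method="highs-ds")
J = res.x
```

For pattern loads well below capacity this LP is feasible and the simplex vertex it returns leaves at least N − P couplings at zero; as the load approaches the storage capacity the feasible region shrinks to a point and the program becomes infeasible, which is one way to read off the capacity numerically.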