Stabilization of Hebbian neural nets by inhibitory learning

Abstract
In Hebbian neural models, synaptic reinforcement occurs when the pre- and post-synaptic neurons are simultaneously active. This causes an instability toward unlimited growth of excitatory synapses. The system can be stabilized by recurrent inhibition via modifiable inhibitory synapses. When this process is included, it is possible to dispense with the non-linear normalization or cut-off conditions that were necessary for stability in previous models. The present formulation is response-linear if synaptic changes are slow. It is self-consistent because the stabilizing effects tend to keep most neural activity in the middle range, where the neural response is approximately linear. The linearized equations are tensor-invariant under a class of rotations of the state space. Using this invariance, the response to stimulation may be derived as a set of independent modes of activity distributed over the net, which may be identified with cell assemblies. A continuously infinite set of equivalent solutions exists.
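
The stabilizing mechanism can be illustrated with a small simulation. The following is a minimal sketch, not a reproduction of the paper's equations: excitatory weights grow by a plain Hebbian rule (which, alone, diverges), while inhibitory weights follow a homeostatic variant that strengthens when postsynaptic activity exceeds a hypothetical target rate. The network size, learning rates, target rate `r_target`, and the single-pass linear response are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50          # number of neurons (illustrative choice)
eta_e = 1e-3    # Hebbian learning rate, excitatory synapses
eta_i = 1e-2    # learning rate, inhibitory synapses (assumed faster)
r_target = 1.0  # hypothetical target firing rate for inhibitory plasticity
T = 5000        # simulation steps

# Recurrent weights, row = postsynaptic, column = presynaptic; both
# matrices are kept non-negative, inhibition enters with a minus sign.
W_e = rng.uniform(0.0, 0.1, (N, N))
W_i = rng.uniform(0.0, 0.1, (N, N))
np.fill_diagonal(W_e, 0.0)
np.fill_diagonal(W_i, 0.0)

mean_rates = []
for t in range(T):
    x = rng.uniform(0.0, 1.0, N)  # external stimulus

    # One relaxation pass of the approximately linear response,
    # r = x + (W_e - W_i) x, rectified at zero.
    r = np.clip(x + (W_e - W_i) @ x, 0.0, None)

    # Hebbian growth of excitation: pre/post coactivity, unbounded alone.
    W_e += eta_e * np.outer(r, r)
    # Inhibitory plasticity: strengthens when the postsynaptic cell fires
    # above target, weakens below -- this term bounds the activity.
    W_i += eta_i * np.outer(r - r_target, r)

    W_e = np.clip(W_e, 0.0, None)
    W_i = np.clip(W_i, 0.0, None)
    mean_rates.append(r.mean())

print(f"mean rate, first 100 steps: {np.mean(mean_rates[:100]):.3f}")
print(f"mean rate, last 100 steps:  {np.mean(mean_rates[-100:]):.3f}")
```

Running the sketch shows the mean rate settling near `r_target` even as the excitatory weights keep growing, since the inhibitory weights track them; this is the qualitative point of the abstract, that plastic inhibition removes the need for explicit normalization or cut-off rules.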