Abstract
A training algorithm for a network of asynchronous learning-threshold elements is presented and analyzed. The algorithm is based on the Hebbian hypothesis, and it allows the learning network's parameters to adapt to changing pattern environments. In particular, the network's properties can be quantified in environments where pattern occurrence is random, with nonequal, nonstationary probability distributions. The state-reassessment probabilities of neurons during information retrieval can also be nonstationary and unequal across neurons. The trained network is a content-addressable memory. The authors evaluate its stabilization properties with respect to a given set of patterns using the theory of Markov processes. The results are applicable to determining efficient coding for information that must be stored, and to predicting the actual pattern-retrieval capabilities of the trained network. The authors include the popular sum-of-outer-products assignment as an analyzable special case of their training procedure, and enable steady-state analysis of a large class of sigmoidal learning curves.
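The sum-of-outer-products assignment mentioned above, together with asynchronous state reassessment, can be illustrated with a minimal sketch. The pattern values, network size, and step count below are illustrative assumptions, not taken from the paper; the paper's adaptive training rule and nonstationary environments are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative bipolar (+1/-1) patterns to be stored.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
n = patterns.shape[1]

# Sum-of-outer-products (Hebbian) weight assignment, zero self-coupling.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def retrieve(probe, steps=200):
    """Asynchronous retrieval: at each step one randomly chosen neuron
    reassesses its state by thresholding its net input."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        i = rng.integers(n)          # random asynchronous selection
        h = W[i] @ s                 # net input to neuron i
        if h != 0:
            s[i] = 1.0 if h > 0 else -1.0
    return s.astype(int)

# Content-addressable behavior: a probe with one corrupted bit
# settles back onto the stored pattern it is closest to.
probe = patterns[0].copy()
probe[2] *= -1
recovered = retrieve(probe)
assert (recovered == patterns[0]).all()
```

At this low memory loading (2 patterns, 8 neurons) each stored pattern is a stable state of the asynchronous dynamics, which is the kind of stabilization property the abstract says the authors quantify via Markov-process analysis.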