Absolutely stable learning of recognition codes by a self-organizing neural network

Abstract
A neural network that self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns is outlined here. Top-down attentional and matching mechanisms are critical for self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged; thereafter, input patterns directly access their recognition codes, or categories, without any search, so recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements derived from the set of all input patterns that are ever experienced. Four types of attentional process (priming, gain control, vigilance, and intermodal competition) are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases in response to an environmental disconfirmation, the system automatically searches for and learns finer recognition categories. A new nonlinear matching law (the 2/3 Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All of the rules describe emergent properties of parallel network interactions. The architecture circumvents the saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
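The rules named above correspond to the ART 1 architecture of Carpenter and Grossberg. As a rough illustration only, the following Python sketch renders the commonly cited fast-learning simplification of that architecture: a Weber-Law choice signal, a vigilance-gated match test, and template learning by intersection. The discrete search loop and the parameter names rho and alpha are our simplification, not the paper's own differential-equation formulation.

```python
import numpy as np

def art1(inputs, rho=0.7, alpha=0.001):
    """Minimal ART 1 fast-learning sketch (assumed standard formulation).

    inputs : iterable of binary (0/1) NumPy vectors.
    rho    : vigilance in (0, 1]; higher values yield finer categories.
    alpha  : small choice parameter implementing the Weber Law Rule.
    Returns the learned templates (critical feature patterns) and the
    category index assigned to each input.
    """
    templates, labels = [], []
    for I in inputs:
        I = np.asarray(I, dtype=int)
        # Weber Law Rule: rank categories by |I AND w_j| / (alpha + |w_j|),
        # favoring templates that match I closely relative to their own size.
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(I & templates[j]) /
                                      (alpha + np.sum(templates[j])))
        chosen = None
        for j in order:
            match = np.sum(I & templates[j])
            # Vigilance test on the 2/3-Rule matched pattern: accept only if
            # the matched portion is a large enough fraction of I itself;
            # otherwise reset this category and continue the search.
            if match / max(np.sum(I), 1) >= rho:
                chosen = j
                break
        if chosen is None:
            # Uncommitted node: learn the input outright as a new template.
            templates.append(I.copy())
            chosen = len(templates) - 1
        else:
            # Template Learning Rule (with Associative Decay): the template
            # shrinks to the intersection of itself and the input.
            templates[chosen] = I & templates[chosen]
        labels.append(chosen)
    return templates, labels

# Toy run: raising rho splits the learned categories more finely.
pats = [np.array([1, 1, 0, 0, 1]),
        np.array([1, 1, 0, 0, 0]),
        np.array([0, 0, 1, 1, 0])]
print(art1(pats, rho=0.5)[1])   # e.g. [0, 0, 1]
```

The sketch also exhibits the self-stabilization property described in the abstract: once a template equals the intersection of its category's exemplars, repeated presentations select it directly and leave it unchanged, so the search is effectively disengaged.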
