Extending memory in the neuromic array

Abstract
The authors describe an algorithm that stores information and provides a learning capability. The algorithm is affiliated with a class of neural networks referred to as neuromic arrays and may be viewed as a generalized training procedure for those networks. The algorithm is demonstrated via simulation for several binary vector training sets. It is shown that the binary universe (-1, 1)^n can be taken as the training set, and simulations demonstrating learning for the n = 4, 6, and 8 cases are presented. The problem of subdividing the binary universe into a disjoint cover of training subsets is also considered. Simulations are used to explore the robustness and recognition performance of the training algorithm when it is applied to the binary universe or to its disjoint covers.
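
The abstract does not specify how the binary universe is enumerated or partitioned; the following is a minimal sketch, in Python, of how the universe (-1, 1)^n might be generated and divided into a disjoint cover of training subsets. The function names and the round-robin partition rule are illustrative assumptions, not the authors' procedure.

    # Hypothetical sketch: enumerate the binary universe (-1, 1)^n and split it
    # into a disjoint cover of training subsets. The round-robin rule below is
    # an illustrative assumption, not the partition scheme used in the paper.
    from itertools import product

    def binary_universe(n):
        """Return all 2^n vectors with components in {-1, +1}."""
        return [tuple(v) for v in product((-1, 1), repeat=n)]

    def disjoint_cover(universe, k):
        """Partition the universe into k pairwise-disjoint subsets (round-robin)."""
        subsets = [[] for _ in range(k)]
        for i, vec in enumerate(universe):
            subsets[i % k].append(vec)
        return subsets

    if __name__ == "__main__":
        n = 4                          # one of the simulated cases (n = 4, 6, 8)
        universe = binary_universe(n)  # 2^4 = 16 training vectors
        cover = disjoint_cover(universe, k=4)
        # The subsets are disjoint and together cover the whole universe.
        assert sum(len(s) for s in cover) == len(universe)
        assert len(set().union(*map(set, cover))) == len(universe)
        print(cover[0])

Any such disjoint cover could then serve as a family of training subsets for the simulations described above; the choice of k and of the assignment rule is left open here.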
