Abstract
The error back-propagation algorithm for perceptrons is studied, and an extension of this algorithm that features selective learning is introduced. In selective learning, one of two selection criteria is used to screen the input data to improve the convergence properties of the back-propagation algorithm. An associative content-addressable memory using multilayer perceptrons is devised to demonstrate the improved convergence.
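The idea of screening input patterns during back-propagation can be sketched as follows. This is a minimal illustration, not the paper's method: the network size, the toy XOR task, and the particular selection criterion (skip patterns whose output error is already below a threshold) are all assumptions made for the example; the paper's two criteria may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR (an assumption for illustration only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer perceptron with 4 hidden units (sizes are illustrative)
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))
lr, threshold = 0.5, 0.05

def loss():
    y = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.mean((T - y) ** 2))

initial_loss = loss()

for epoch in range(20000):
    for x, t in zip(X, T):
        h = sigmoid(x @ W1)              # hidden-layer activations
        y = sigmoid(h @ W2)              # output-layer activation
        err = t - y
        # Selective learning screen (hypothetical criterion):
        # skip patterns that are already learned to within the threshold
        if np.abs(err).max() < threshold:
            continue
        # Standard error back-propagation updates
        d2 = err * y * (1 - y)
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 += lr * np.outer(h, d2)
        W1 += lr * np.outer(x, d1)

final_loss = loss()
print(initial_loss, final_loss)
```

Screening keeps well-learned patterns from dominating the weight updates, so effort concentrates on the patterns still far from their targets.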