Abstract
The problem of adaptive principal components extraction (APEX) has attracted considerable interest. In 1990, a new neuro-computation algorithm for this purpose was proposed by S. Y. Kung and K. I. Diamantaras (see ICASSP 90, p. 861-4, vol. 2, 1990). An alternative proof is presented to show that the K-D algorithm is in fact richer than previously established. The proof shows that the neural network converges and that the principal components can be extracted, without assuming that some of the projections of the synaptic weight vectors have already diminished to zero. In addition, the authors show that the K-D algorithm converges exponentially.
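The abstract does not state the update equations of the K-D (APEX) algorithm. As context only, the following is a minimal sketch of an APEX-style extraction loop, assuming the commonly cited form with Hebbian feedforward weights and anti-Hebbian lateral weights; the function name, learning-rate schedule, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def apex_extract(X, m, eta=0.01, epochs=50, seed=0):
    """Sketch of an APEX-style sequential PCA extraction.

    X : (n_samples, n_features) zero-mean data
    m : number of principal components to extract
    Returns W, an (m, n_features) array whose rows approximate
    the leading principal directions.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(m, d))  # feedforward (synaptic) weights
    C = np.zeros((m, m))                    # lateral weights, strictly lower-triangular

    for _ in range(epochs):
        for x in X:
            y = np.empty(m)
            for j in range(m):
                # neuron j: feedforward response minus lateral inhibition
                # from the already-trained neurons 0..j-1
                y[j] = W[j] @ x - C[j, :j] @ y[:j]
            for j in range(m):
                # Hebbian update with an Oja-style normalization term
                W[j] += eta * (y[j] * x - y[j] ** 2 * W[j])
                # anti-Hebbian update of the lateral weights
                C[j, :j] += eta * (y[j] * y[:j] - y[j] ** 2 * C[j, :j])
    return W
```

In this assumed form, the lateral connections decorrelate each neuron's output from the outputs of the earlier neurons, so that successive neurons settle on successive principal directions rather than all converging to the first one.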
