An alternative proof of convergence for Kung-Diamantaras APEX algorithm
- 9 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The problem of adaptive principal components extraction (APEX) has gained much interest. In 1990, a new neuro-computation algorithm for this purpose was proposed by S. Y. Kung and K. I. Diamautaras. (see ICASSP 90, p.861-4, vol.2, 1990). An alternative proof is presented to illustrate that the K-D algorithm is in fact richer than has been proved before. The proof shows that the neural network will converge and the principal components can be extracted, without assuming that some of projections of synaptic weight vectors have diminished to zero. In addition, the authors show that the K-D algorithm converges exponentially.Keywords
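The abstract does not reproduce the update equations, so the following is a minimal sketch of the APEX rules as commonly stated in the literature: each neuron combines a Hebbian feedforward update (with an Oja-style normalization term) and an anti-Hebbian update of lateral weights from previously extracted neurons. The function names, learning rate beta, and the synthetic data are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def apex_train(X, m, beta=0.01, epochs=30):
    """Sketch of APEX: m neurons with feedforward weights W and
    strictly lower-triangular lateral (inhibitory) weights C."""
    n = X.shape[1]
    W = rng.normal(scale=0.1, size=(m, n))  # Hebbian feedforward weights
    C = np.zeros((m, m))                    # anti-Hebbian lateral weights
    for _ in range(epochs):
        for x in X:
            # Outputs are computed in order: neuron j sees y_1 .. y_{j-1}.
            y = np.zeros(m)
            for j in range(m):
                y[j] = W[j] @ x + C[j, :j] @ y[:j]
            for j in range(m):
                # Hebbian rule with Oja-style decay term y_j^2 * w_j.
                W[j] += beta * (y[j] * x - y[j] ** 2 * W[j])
                # Anti-Hebbian rule: drives lateral weights, and hence
                # output correlations, toward zero during training.
                C[j, :j] -= beta * (y[j] * y[:j] + y[j] ** 2 * C[j, :j])
    return W, C

# Synthetic data with known principal directions.
A = rng.normal(size=(3, 3))
X = rng.normal(size=(2000, 3)) @ A.T
W, C = apex_train(X, m=2)

# Compare learned weights with the leading eigenvectors of the covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
for j in range(2):
    v = eigvecs[:, -1 - j]  # eigh sorts ascending; take largest first
    print(f"|cos angle| to PC{j + 1}:",
          abs(W[j] @ v) / np.linalg.norm(W[j]))
```

Under these assumptions, the decay of the lateral weights C is what decorrelates the outputs; the convergence behavior of exactly this kind of coupled update is what the paper's alternative proof addresses.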
References
- A neural network learning algorithm for adaptive principal component extraction (APEX). IEEE, 2002.
- Constrained principal component analysis via an orthogonal learning network. IEEE, 2002.
- Orthogonal learning network for constrained principal component problem. IEEE, 1990.
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 1985.
- Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 1982.