Global analysis of Oja's flow for neural networks
- 1 January 1994
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 5 (5) , 674-683
- https://doi.org/10.1109/72.317720
Abstract
A detailed study of Oja's learning equation in neural networks is undertaken in this paper. Not only are such fundamental issues as existence, uniqueness, and representation of solutions completely resolved, but the convergence question is settled as well. It is shown that the solution of Oja's equation converges exponentially to an equilibrium from any initial value. Moreover, necessary and sufficient conditions are given on the initial value for the solution to converge to a dominant eigenspace of the associated autocorrelation matrix. As a by-product, this result confirms one of Oja's conjectures: that the solution converges to the principal eigenspace from almost all initial values. Some other characteristics of the limiting solution are also revealed; these make it possible to determine the limiting solution in advance from the initial information alone. Two examples are analyzed, demonstrating the explicit dependence of the limiting solution on the initial value. In another respect, it is found that Oja's equation is the gradient flow of generalized Rayleigh quotients on a Stiefel manifold.
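The behavior described in the abstract can be illustrated numerically. The sketch below integrates the matrix form of Oja's flow, dW/dt = CW − W(WᵀCW) for W ∈ ℝⁿˣᵖ, with a simple Euler scheme; the matrix `C`, the dimensions, the step size, and the iteration count are illustrative choices, not taken from the paper. For a generic (almost any) random initial value, the columns of W should become orthonormal and span the dominant p-dimensional eigenspace of C, as the paper proves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric autocorrelation matrix C (positive definite).
n, p = 5, 2
A = rng.standard_normal((n, n))
C = A @ A.T + np.diag(np.arange(n, 0, -1.0))

# Euler integration of Oja's flow: dW/dt = C W - W (W^T C W).
W = rng.standard_normal((n, p))   # generic initial value
dt = 1e-3
for _ in range(200_000):
    W = W + dt * (C @ W - W @ (W.T @ C @ W))

# At the limit, W should have orthonormal columns spanning the
# dominant eigenspace of C (for almost all initial values).
eigvals, eigvecs = np.linalg.eigh(C)
U = eigvecs[:, -p:]          # top-p eigenvectors of C
P_oja = W @ W.T              # projector onto span(W)
P_top = U @ U.T              # projector onto dominant eigenspace
print(np.linalg.norm(W.T @ W - np.eye(p)))   # near 0: columns orthonormal
print(np.linalg.norm(P_oja - P_top))         # near 0: same subspace
```

The comparison is done between projectors rather than between W and U directly, since the flow fixes only the subspace, not a particular orthonormal basis within it.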