Nonlinear generalizations of principal component learning algorithms
- 24 August 2005
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 3, 2599-2602
- https://doi.org/10.1109/ijcnn.1993.714256
Abstract
In this paper, we introduce and study nonlinear generalizations of several neural algorithms that learn the principal eigenvectors of the data covariance matrix. We first consider robust versions that optimize a nonquadratic criterion under orthonormality constraints. As an important byproduct, Sanger's GHA and Oja's SGA algorithms for learning principal components are derived from a natural optimization problem. We also introduce a fully nonlinear generalization that has signal separation capabilities not possessed by standard principal component analysis learning algorithms.
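To make the flavor of such rules concrete, below is a minimal NumPy sketch of a GHA-style stochastic update in which the linear outputs y = W x are passed through an elementwise nonlinearity (here tanh, an assumed choice). This illustrates the general form of nonlinear PCA-type learning discussed in the abstract; it is not the exact criterion or update derived in the paper.

```python
import numpy as np

def nonlinear_gha_step(W, x, eta=0.01, g=np.tanh):
    """One stochastic update of a GHA-style rule with the outputs y = W x
    passed through a nonlinearity g. Sketch only: the specific nonquadratic
    criterion and constraints used in the paper may differ.
    W has shape (m, n): m components, n input dimensions."""
    y = W @ x                      # linear projections onto current weights
    gy = g(y)                      # elementwise nonlinearity, e.g. tanh
    # Lower-triangular deflation term imposes an ordering on the components,
    # as in Sanger's GHA; with g the identity this reduces to standard GHA.
    lt = np.tril(np.outer(gy, gy))
    return W + eta * (np.outer(gy, x) - lt @ W)

# Tiny usage example: extract 2 components from 5-dimensional data.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 5))
for _ in range(1000):
    x = rng.normal(size=5)
    W = nonlinear_gha_step(W, x)
```

With g set to the identity, the update reduces to Sanger's linear GHA; a nonlinear g is what gives such rules the signal separation behavior the abstract refers to.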