Nonlinear generalizations of principal component learning algorithms

Abstract
In this paper, we introduce and study nonlinear generalizations of several neural algorithms that learn the principal eigenvectors of the data covariance matrix. We first consider robust versions that optimize a nonquadratic criterion under orthonormality constraints. As an important byproduct, Sanger's GHA and Oja's SGA algorithms for learning principal components are derived from a natural optimization problem. We also introduce a fully nonlinear generalization that has signal separation capabilities not possessed by standard principal component analysis learning algorithms.
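
To make the starting point concrete, the following is a minimal NumPy sketch of one stochastic update of Sanger's GHA, with a hook for inserting an elementwise nonlinearity g into the outputs to obtain a simple nonlinear variant of the rule. The function name gha_step, the tanh choice for g, and the learning-rate and data settings are illustrative assumptions, not taken from the paper; the robust and fully nonlinear generalizations studied here place the nonlinearity according to the optimization criterion rather than by this direct substitution.

    import numpy as np

    def gha_step(W, x, lr=0.01, g=None):
        # One update of Sanger's Generalized Hebbian Algorithm (GHA).
        # W : (k, d) matrix whose rows approximate the k principal
        #     eigenvectors of the covariance of the zero-mean inputs x.
        # g : optional elementwise nonlinearity applied to the outputs
        #     (identity, i.e. g=None, recovers the plain linear GHA).
        y = W @ x                          # outputs y_i = w_i^T x
        if g is not None:
            y = g(y)                       # nonlinear outputs g(y_i)
        # Sanger's rule: delta w_i = lr * y_i * (x - sum_{j<=i} y_j w_j)
        lower = np.tril(np.outer(y, y))    # lower-triangular part of y y^T
        W += lr * (np.outer(y, x) - lower @ W)
        return W

    # Usage sketch: extract the 3 leading principal directions of
    # synthetic zero-mean data with a diagonal covariance matrix.
    rng = np.random.default_rng(0)
    C = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])
    X = rng.multivariate_normal(np.zeros(5), C, size=20000)
    W = rng.normal(scale=0.1, size=(3, 5))
    for x in X:
        W = gha_step(W, x, lr=0.005)       # or g=np.tanh for the nonlinear variant
    print(np.round(W, 2))                  # rows approximate the leading eigenvectors

In the linear case the rows of W converge (for a suitably decreasing learning rate) to the principal eigenvectors of the data covariance matrix; passing a saturating g such as tanh is one way to make the updates less sensitive to outliers, in the spirit of the robust versions considered in the paper.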