Successive learning of linear discriminant analysis: Sanger-type algorithm
- 11 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2 (10514651), pp. 664-667
- https://doi.org/10.1109/icpr.2000.906162
Abstract
Linear discriminant analysis (LDA) is applied in broad areas, e.g. image recognition. However, successive learning algorithms for LDA have not been studied sufficiently, while such algorithms are well established for principal component analysis (PCA). A successive learning algorithm that does not require N×N matrices, where N is the dimension of the data, has been proposed for LDA (Hiraoka and Hamahira, 1999; Hiraoka et al., 2000). In the present paper, an improvement of this algorithm is examined based on Sanger's (1989) idea. With the original algorithm, only the subspace spanned by the major eigenvectors can be obtained; with the improved algorithm, the major eigenvectors themselves are obtained.
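To illustrate the kind of update rule the paper builds on, the following is a minimal sketch of Sanger's (1989) generalized Hebbian algorithm (GHA) for online extraction of the leading eigenvectors of a data covariance matrix. This is the generic PCA form of Sanger's rule, not the paper's LDA-specific successive algorithm; the function name, learning rate, epoch count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def sanger_gha(X, n_components, lr=0.001, n_epochs=20, seed=0):
    """Online estimation of the leading eigenvectors of the data covariance
    using Sanger's generalized Hebbian algorithm (GHA).

    X : (n_samples, dim) array of (ideally zero-mean) data vectors.
    Returns W : (n_components, dim); rows approximate the major eigenvectors,
    ordered by decreasing eigenvalue.
    """
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, dim))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x  # outputs y_i = w_i . x
            # Sanger update: dW = lr * (y x^T - lower_triangular(y y^T) W).
            # The lower-triangular term deflates each unit against the ones
            # above it, so individual eigenvectors emerge, not just a subspace.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Example: recover the two leading principal directions of synthetic data.
rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
data -= data.mean(axis=0)
W = sanger_gha(data, n_components=2)
print(W / np.linalg.norm(W, axis=1, keepdims=True))
```

The deflation structure of this rule is what the paper transfers to the successive LDA learner of Hiraoka et al.: a symmetric (Oja-type) rule converges only to the span of the major eigenvectors, whereas the Sanger-type asymmetry recovers the eigenvectors themselves.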
This publication has 6 references indexed in Scilit:
- Gesture recognition using HLAC features of PARCOR images and HMM based recognizer. Published by Institute of Electrical and Electronics Engineers (IEEE), 2002
- Convergence analysis of online linear discriminant analysis. Published by Institute of Electrical and Electronics Engineers (IEEE), 2000
- On self-organizing algorithms and networks for class-separability features. IEEE Transactions on Neural Networks, 1997
- Artificial neural networks for feature extraction and multivariate data projection. IEEE Transactions on Neural Networks, 1995
- Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 1989
- Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 1982