Discriminant analysis and eigenspace partition tree for face and object recognition from views
- 24 December 2002
- proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- p. 192-197
- https://doi.org/10.1109/afgr.1996.557263
Abstract
The method we have been using is based on our Self-Organizing Hierarchical Optimal Subspace Learning and Inference Framework (SHOSLIF). It uses the theory of linear discriminant projection for automatic optimal feature selection in each of the internal nodes of a Space-Tessellation Tree. In this paper, we present our recent study on the applicability of the approach to variability in position, size, and 3D orientation. In the work presented here, we require "well-framed" images as input for recognition. By well-framed images we mean that only a relatively small variation in the size, position, and orientation of the objects in the input images is allowed. We report experimental results that show the performance difference between the subspaces of linear discriminant analysis and principal component analysis, and the effect of using a tree as opposed to a flat eigenspace.
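The paper itself provides no code; as a minimal illustrative sketch of the Fisher linear discriminant projection the abstract refers to (not the authors' SHOSLIF implementation), the following assumes NumPy and a hypothetical `lda_projection` helper that computes the discriminant subspace from labeled training vectors:

```python
import numpy as np

def lda_projection(X, y, k):
    """Fisher linear discriminant projection (illustrative sketch).

    X : (n_samples, n_features) data matrix
    y : (n_samples,) integer class labels
    k : number of discriminant directions to keep
    Returns a (n_features, k) projection matrix whose columns maximize
    between-class scatter relative to within-class scatter.
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Solve the generalized eigenproblem Sb v = lambda Sw v via a
    # pseudoinverse (robust when Sw is near-singular, as is common
    # for high-dimensional image data with few samples per class).
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:k]]
```

In a tree such as the one the abstract describes, a projection like this would be recomputed at each internal node from the samples that reach that node, so each level of the tessellation uses features tuned to the remaining discrimination task rather than a single flat eigenspace.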