Principal manifolds and Bayesian subspaces for visual recognition
- 1 January 1999
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2, pp. 1131-1136
- https://doi.org/10.1109/iccv.1999.790407
Abstract
We investigate the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Three techniques, principal component analysis (PCA), independent component analysis (ICA), and nonlinear PCA (NLPCA), are examined and tested in a visual recognition experiment using a large gallery of facial images from the "FERET" database. We compare the recognition performance of a nearest-neighbour matching rule with each principal manifold representation to that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from probabilistic subspaces, and demonstrate the superiority of the latter.
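As a rough illustration of the baseline pipeline the abstract describes (a sketch, not the authors' implementation, and using synthetic vectors rather than FERET images), a PCA subspace can be fit to a gallery, and probes matched by nearest neighbour in the subspace coefficients:

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-dimensional PCA subspace to row-vector samples X (n x d)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # mean vector and k x d orthonormal basis

def pca_project(X, mu, W):
    """Project samples onto the principal subspace (coefficient vectors)."""
    return (X - mu) @ W.T

def nearest_neighbor(probe_coeffs, gallery_coeffs):
    """Index of the closest gallery sample for each probe (Euclidean)."""
    d = np.linalg.norm(
        probe_coeffs[:, None, :] - gallery_coeffs[None, :, :], axis=2
    )
    return d.argmin(axis=1)

# Toy example: 20 synthetic "images" of dimension 64 (stand-ins for faces).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(20, 64))
mu, W = pca_fit(gallery, k=5)
g = pca_project(gallery, mu, W)
# Probes are slightly perturbed copies of the gallery vectors.
p = pca_project(gallery + 0.01 * rng.normal(size=gallery.shape), mu, W)
matches = nearest_neighbor(p, g)
```

The paper's MAP alternative replaces the Euclidean distance here with a Bayesian similarity measure derived from probabilistic subspace densities; the nearest-neighbour rule above is only the baseline being compared against.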
This publication has 23 references indexed in Scilit:
- Probabilistic visual learning for object detection. Published by Institute of Electrical and Electronics Engineers (IEEE), 2002
- Independent component representations for face recognition. Published by SPIE-Intl Soc Optical Eng, 1998
- Probabilistic visual learning for object representation. Published by Institute of Electrical and Electronics Engineers (IEEE), 1997
- Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision, 1995
- Independent component analysis, a new concept? Signal Processing, 1994
- Active Shape Models - 'smart snakes'. Published by British Machine Vision Association and Society for Pattern Recognition, 1992
- Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 1991
- Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 1991
- Principal Curves. Journal of the American Statistical Association, 1989
- Projection Pursuit. The Annals of Statistics, 1985