Stabilizing classifiers for very small sample sizes
- 1 January 1996
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2, pp. 891-896 (ISSN 1051-4651)
- https://doi.org/10.1109/icpr.1996.547204
Abstract
In this paper, the possibilities for constructing linear classifiers for very small sample sizes are considered. We propose a stability measure and present a study of the performance and stability of the following techniques: regularization by the ridge estimate of the covariance matrix, bootstrapping followed by aggregation ("bagging"), and editing combined with pseudo-inversion. It is shown that these techniques allow a smooth transition between the nearest mean classifier and the Fisher discriminant (1936, 1940) based on large sample sizes. Very good results compared with the nearest mean method are obtained, especially for highly correlated data.
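To make the abstract's construction concrete, the following is a minimal sketch (in Python with NumPy; not the authors' code) of two of the techniques named: a ridge estimate of the pooled covariance matrix in a two-class linear discriminant, and bagging of that discriminant. The function names, the default parameter values, and the coefficient-averaging aggregation are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of a ridge-regularized two-class linear discriminant
# and a bagged version of it. Assumed, not taken from the paper: the
# function names, the averaging aggregation, and all default parameters.
import numpy as np

def ridge_fisher(X0, X1, lam=0.0):
    """Linear discriminant (w, b) with a ridge estimate S + lam*I of the
    pooled covariance matrix. lam = 0 gives the classical Fisher
    discriminant when S is invertible."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    # Pooled within-class covariance matrix.
    S = ((n0 - 1) * np.cov(X0, rowvar=False)
         + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
    w = np.linalg.solve(S + lam * np.eye(len(m0)), m0 - m1)
    b = -0.5 * w @ (m0 + m1)  # threshold halfway between the class means
    return w, b

def bagged_fisher(X0, X1, n_boot=25, lam=1e-6, rng=None):
    """Bootstrap each class, fit a (slightly regularized) discriminant on
    every replicate, and aggregate by averaging the coefficients."""
    rng = np.random.default_rng(rng)
    ws, bs = [], []
    for _ in range(n_boot):
        B0 = X0[rng.integers(len(X0), size=len(X0))]
        B1 = X1[rng.integers(len(X1), size=len(X1))]
        w, b = ridge_fisher(B0, B1, lam)
        ws.append(w)
        bs.append(b)
    return np.mean(ws, axis=0), np.mean(bs)

def predict(X, w, b):
    # Points on the positive side of the hyperplane go to class 0.
    return np.where(X @ w + b > 0, 0, 1)

# Example: 5 samples per class of highly correlated 2-D Gaussian data.
rng = np.random.default_rng(0)
cov = [[1.0, 0.9], [0.9, 1.0]]
X0 = rng.multivariate_normal([0.0, 0.0], cov, size=5)
X1 = rng.multivariate_normal([1.5, 1.5], cov, size=5)
w, b = bagged_fisher(X0, X1, rng=1)
print(predict(np.vstack([X0, X1]), w, b))
```

In this sketch, a very large `lam` shrinks the covariance estimate toward the identity, so the decision rule approaches the nearest mean classifier; this is one way to realize the smooth transition between the nearest mean classifier and the Fisher discriminant that the abstract describes.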
References
- Discriminant Analysis. Published by Wiley, 2004
- Bagging predictors. Machine Learning, 1996
- An experimental comparison of neural classifiers with 'traditional' classifiers. Published by Elsevier, 1994
- An Introduction to the Bootstrap. Published by Springer Nature, 1993
- Small sample size effects in statistical pattern recognition: recommendations for practitioners. Published by Institute of Electrical and Electronics Engineers (IEEE), 1991
- Regularized Discriminant Analysis. Journal of the American Statistical Association, 1989
- The Use of Shrinkage Estimators in Linear Discriminant Analysis. Published by Institute of Electrical and Electronics Engineers (IEEE), 1982
- Dimensionality and sample size considerations in pattern recognition practice. Published by Elsevier, 1982
- On Dimensionality, Sample Size, Classification Error, and Complexity of Classification Algorithm in Pattern Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1980
- The Precision of Discriminant Functions. Annals of Eugenics, 1940