Combining independent and unbiased classifiers using weighted average
- 11 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 2 (ISSN 1051-4651), pp. 495-498
- https://doi.org/10.1109/icpr.2000.906120
Abstract
In a classification problem, improved accuracy can often be obtained by combining several classifiers instead of using a single one. Tumer and Ghosh (1999) derived the error reduction obtained by combining unbiased classifiers with independent errors using a simple average. We extend this result by deriving the improvement obtained when classifiers are combined using a weighted average. We also prove that, for unbiased classifiers with independent errors, the best combination of N classifiers is a weighted average in which the combination coefficient of each classifier equals 1/N; in these cases, therefore, the simple average should be used. We present experiments illustrating our results.
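The optimality claim in the abstract can be illustrated with standard variance algebra: for unbiased predictors with independent errors, the error variance of a weighted average is the weighted sum of squared coefficients times the individual variances. The sketch below is ours, not reproduced from the paper; the function name and the example variances are illustrative assumptions.

```python
def combined_variance(weights, variances):
    """Error variance of a weighted average of unbiased classifiers
    with independent errors: Var = sum_i w_i^2 * sigma_i^2.
    (Illustrative helper, not from the paper.)"""
    assert abs(sum(weights) - 1.0) < 1e-12  # coefficients must sum to 1
    return sum(w * w * v for w, v in zip(weights, variances))

# three unbiased classifiers with equal error variances
variances = [1.0, 1.0, 1.0]
equal = combined_variance([1/3, 1/3, 1/3], variances)   # simple average -> 1/3
skewed = combined_variance([0.6, 0.3, 0.1], variances)  # unequal weights -> 0.46
```

With equal error variances, any deviation from the uniform weights 1/N increases the combined error variance, which is the case where the paper concludes the simple average should be used.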
References
- Popular Ensemble Methods: An Empirical Study. Journal of Artificial Intelligence Research, 1999.
- On combining classifiers. Published by the Institute of Electrical and Electronics Engineers (IEEE), 1998.
- Optimal Linear Combinations of Neural Networks. Neural Networks, 1997.
- Boundary variance reduction for improved classification through hybrid networks. Published by SPIE-Intl Soc Optical Eng, 1995.
- Democracy in neural nets: Voting schemes for classification. Neural Networks, 1994.
- Neural Network Classifiers Estimate Bayesian a posteriori Probabilities. Neural Computation, 1991.
- Neural network ensembles. Published by the Institute of Electrical and Electronics Engineers (IEEE), 1990.