On the monotonicity of the performance of Bayesian classifiers (Corresp.)
- 1 May 1978
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Information Theory
- Vol. 24 (3) , 392-394
- https://doi.org/10.1109/tit.1978.1055877
Abstract
Even with a finite set of training samples, the performance of a Bayesian classifier cannot be degraded by increasing the number of features, as long as the old features are recoverable from the new features. This is true even for the general Bayesian classifiers investigated by Hughes, a result which contradicts previous interpretations of Hughes' model. The reasons for these difficulties are discussed. It would appear that the peaking behavior of practical classifiers is caused principally by their nonoptimal use of the features.
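The monotonicity claim can be checked exactly on a toy discrete problem: when the optimal (Bayes) rule is computed from the true distributions, augmenting a feature with a second one from which the original is recoverable can only preserve or improve accuracy. The sketch below uses invented class-conditional distributions purely for illustration; it is not the construction from the paper.

```python
# Hedged sketch: exact Bayes accuracy on a toy two-class problem with binary
# features, illustrating that adding a feature (while keeping the old one
# recoverable) cannot lower optimal accuracy. Distributions are invented.

# P(x1, x2 | class) for two equiprobable classes; x1 is trivially
# recoverable from (x1, x2) since it is a coordinate of the new vector.
p = {
    0: {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.1},  # class 0
    1: {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5},  # class 1
}
prior = {0: 0.5, 1: 0.5}

def bayes_accuracy(features):
    """Exact accuracy of the Bayes rule using only the given feature indices."""
    values = {tuple(x[i] for i in features) for x in p[0]}
    acc = 0.0
    for v in values:
        # Joint P(class, projected feature value); the Bayes rule picks the
        # class with the larger joint probability, contributing max() to accuracy.
        joint = [prior[c] * sum(pr for x, pr in p[c].items()
                                if tuple(x[i] for i in features) == v)
                 for c in (0, 1)]
        acc += max(joint)
    return acc

acc_x1 = bayes_accuracy((0,))      # old feature alone
acc_both = bayes_accuracy((0, 1))  # augmented feature set
assert acc_both >= acc_x1 - 1e-12  # monotonicity: performance cannot degrade
print(acc_x1, acc_both)
```

With these particular distributions the augmented feature set strictly helps (accuracy rises from 0.65 to 0.70); in degenerate cases, such as an uninformative second feature, the two accuracies coincide, which is still consistent with the monotonicity result. Peaking only appears when the classifier uses the features suboptimally, e.g. when densities must be estimated from few samples.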
References
- On dimensionality and sample size in statistical pattern classification. Pattern Recognition, 1971
- Independence of measurements and the mean recognition accuracy. IEEE Transactions on Information Theory, 1971
- The mean accuracy of pattern recognizers with many pattern classes (Corresp.). IEEE Transactions on Information Theory, 1969
- Comments on "On the mean accuracy of statistical pattern recognizers" by Hughes, G. F. IEEE Transactions on Information Theory, 1969
- On the mean accuracy of statistical pattern recognizers. IEEE Transactions on Information Theory, 1968