The Use of Chance Corrected Percentage of Agreement to Interpret the Results of a Discriminant Analysis

Abstract
Most programs for discriminant analysis provide a summary table of hits and misses in predicting group membership from the discriminant function. Interpretation of such tables can be greatly enhanced by computing Cohen's kappa, κ, the chance-corrected percentage of agreement between actual and predicted group membership. The standard error of kappa can be used to set confidence limits on the accuracy of the discriminant prediction and to test the difference in predictive accuracy between two independent samples. These procedures are demonstrated using data previously published in a more preliminary form.
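The computation the abstract describes can be sketched from a hits-and-misses (confusion) table. The following is a minimal illustration, not the article's original code; the function name and the large-sample standard-error approximation used here are assumptions for the sketma — kappa is (observed agreement − chance agreement) / (1 − chance agreement), with chance agreement taken from the row and column marginals.

```python
def kappa_from_confusion(matrix):
    """Cohen's kappa and an approximate standard error from a square
    confusion table (rows = actual group, columns = predicted group).
    Illustrative sketch; the SE formula is a common large-sample
    approximation, not necessarily the one used in the article."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    # Observed proportion of agreement: diagonal hits over total n.
    p_o = sum(matrix[i][i] for i in range(k)) / n
    # Chance-expected agreement from row and column marginals.
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    kappa = (p_o - p_e) / (1 - p_e)
    # Large-sample standard error (simple approximation).
    se = ((p_o * (1 - p_o)) / (n * (1 - p_e) ** 2)) ** 0.5
    return kappa, se

# Example: 45 + 40 correct classifications out of 100 cases.
k_hat, se = kappa_from_confusion([[45, 5], [10, 40]])
# 95% confidence limits for kappa as the abstract suggests:
lower, upper = k_hat - 1.96 * se, k_hat + 1.96 * se
```

For this table the observed agreement is 0.85 and the chance-expected agreement is 0.50, so kappa is 0.70: agreement is 70% of the way from chance-level to perfect, a more informative summary than the raw 85% hit rate.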
