Abstract
Classifications based on test results are used routinely in educational research and practice. Test validity is usually expressed as a correlation coefficient, but a correlation does not by itself indicate the expected accuracy of pass-fail or eligible-not eligible decisions based on test scores. This paper describes expectancy tables for converting a validity or test-retest reliability coefficient, r, into measures of classification accuracy for dichotomous categories. Results for a sample of correlation coefficients and cut-off scores are reported using several indicators of accuracy: sensitivity, efficiency, specificity, hit rate, and kappa. It appears that a validity coefficient above .90 is required to achieve a kappa above .70 and to keep false-positive and false-negative error rates below 25%. This suggests that many tests and measures considered to have adequate validity (between .60 and .90) will often have limited utility for making diagnostic, placement, or treatment decisions.
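The relationship the abstract describes can be illustrated with a small Monte-Carlo sketch. The code below is not the paper's expectancy-table method; it assumes bivariate-normal test and criterion scores correlated at r, dichotomizes both at a common cutoff (the median here), and computes the accuracy indices named above. The function name and defaults are illustrative choices, not taken from the paper.

```python
import math
import random

def classification_indices(r, cutoff=0.0, n=200_000, seed=1):
    """Simulate standard-normal criterion and test scores correlated at r,
    dichotomize both at `cutoff`, and return classification-accuracy indices."""
    rng = random.Random(seed)
    tp = fp = fn = tn = 0
    s = math.sqrt(1.0 - r * r)
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)              # criterion ("true") score
        y = r * x + s * rng.gauss(0.0, 1.0)  # test score with corr(x, y) = r
        true_pos, test_pos = x >= cutoff, y >= cutoff
        if true_pos and test_pos:
            tp += 1
        elif test_pos:
            fp += 1
        elif true_pos:
            fn += 1
        else:
            tn += 1
    hit_rate = (tp + tn) / n                 # overall proportion correct
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_test = (tp + fp) / n
    p_true = (tp + fn) / n
    p_chance = p_test * p_true + (1 - p_test) * (1 - p_true)
    kappa = (hit_rate - p_chance) / (1 - p_chance)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "hit_rate": hit_rate, "kappa": kappa}

print(classification_indices(0.9))  # kappa lands near .71, matching the abstract
print(classification_indices(0.6))  # a "respectable" validity fares much worse
```

Under these assumptions, r = .90 with a median split yields a kappa just above .70, consistent with the abstract's threshold, while r = .60 yields a kappa near .40 despite being conventionally acceptable validity.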