Abstract
To provide counterexamples to some commonly held generalizations about the benefits of nonparametric tests, the author concurrently violated two assumptions of parametric statistical significance tests, normality and homogeneity of variance, in a simulation study. For various combinations of nonnormal distribution shapes and degrees of variance heterogeneity, the Type I error probability of a nonparametric rank test, the Wilcoxon-Mann-Whitney test, was found to be biased to a far greater extent than that of its parametric counterpart, the Student t test. The Welch-Satterthwaite separate-variances version of the t test, together with a preliminary outlier detection and downweighting procedure, protected the statistical significance level more consistently than the nonparametric test did. Those findings reveal that nonparametric methods are not always acceptable substitutes for parametric methods such as the t test and the F test when parametric assumptions are not satisfied. They also indicate that multiple violations of assumptions can produce anomalous effects not observed when each assumption is violated separately.
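The kind of design described can be illustrated with a minimal Monte Carlo sketch. The code below is not the author's simulation; the sample sizes, the exponential (skewed) population shape, the variance ratio, and the nominal .05 level are assumptions chosen only for illustration. It draws two samples with equal population means but unequal variances and nonnormal shape, then tallies how often the Student t test, the Welch-Satterthwaite t test, and the Wilcoxon-Mann-Whitney test reject the null hypothesis.

```python
# Minimal sketch of a Type I error simulation under combined violations of
# normality and homogeneity of variance. All design values (n1, n2, variance
# ratio, exponential shape, alpha, number of replications) are illustrative
# assumptions, not figures taken from the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2 = 20, 40            # unequal sample sizes (assumed)
sd1, sd2 = 1.0, 4.0        # heterogeneous standard deviations (assumed)
alpha, n_reps = 0.05, 10_000

rejections = {"student_t": 0, "welch_t": 0, "wilcoxon_mw": 0}

for _ in range(n_reps):
    # Centered exponential samples: both populations have mean 0, so the null
    # hypothesis of equal means is true, but shapes are skewed and variances differ.
    x = sd1 * (rng.exponential(1.0, n1) - 1.0)
    y = sd2 * (rng.exponential(1.0, n2) - 1.0)

    _, p_student = stats.ttest_ind(x, y, equal_var=True)    # pooled-variance t
    _, p_welch = stats.ttest_ind(x, y, equal_var=False)     # Welch-Satterthwaite t
    _, p_mw = stats.mannwhitneyu(x, y, alternative="two-sided")

    rejections["student_t"] += p_student < alpha
    rejections["welch_t"] += p_welch < alpha
    rejections["wilcoxon_mw"] += p_mw < alpha

for name, count in rejections.items():
    print(f"{name}: empirical Type I error ~ {count / n_reps:.3f}")
```

Comparing the empirical rejection rates with the nominal level is what reveals the pattern the abstract reports: under combined violations, the rank test's Type I error can depart from the nominal level more than that of the t test, while the separate-variances version holds the level more consistently.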
