Abstract
At present, too many research results in education are blatantly described as significant, when they are in fact trivially small and unimportant. There are several things researchers can do to minimize the importance of statistical significance testing and get articles published without using these tests. First, they can insert "statistically" in front of "significant" in research reports. Second, results can be interpreted before p values are reported. Third, effect sizes can be reported along with measures of sampling error. Fourth, replication can be built into the design. The touting of insignificant results as significant because they are statistically significant is not likely to change until researchers break the stranglehold that statistical significance testing has on journal editors.
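To make the third recommendation concrete, here is a minimal sketch, not drawn from the article itself, of reporting an effect size together with a measure of its sampling error: Cohen's d for two independent groups with an approximate 95% confidence interval. The function name, the example data, and the large-sample standard-error approximation are all illustrative assumptions.

import numpy as np

def cohens_d_with_ci(group_a, group_b):
    """Cohen's d for two independent samples, with an approximate
    95% confidence interval based on the large-sample standard error."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n1, n2 = a.size, b.size
    # Pooled standard deviation across the two groups.
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    # Approximate standard error of d (large-sample approximation).
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)  # 1.96 = 95% normal quantile

# Hypothetical data: report the effect size and its interval, not only a p value.
treatment = [78, 85, 90, 72, 88, 95, 81]
control = [70, 75, 80, 68, 77, 83, 74]
d, (lo, hi) = cohens_d_with_ci(treatment, control)
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

Reporting the effect size with its interval lets readers judge whether an effect is large enough to matter, independently of whether it happens to be statistically significant.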