Why “Encouraging” Effect Size Reporting Is Not Working: The Etiology of Researcher Resistance to Changing Practices
- 1 March 1999
- research article
- Published by Taylor & Francis in The Journal of Psychology
- Vol. 133 (2), 133-140
- https://doi.org/10.1080/00223989909599728
Abstract
Given decades of lucid, blunt admonitions that statistical significance tests are often misused and are somewhat limited in their usefulness, what is needed is less repeated bashing of statistical tests and more honest reflection on the etiology of researchers' denial and psychological resistance (sometimes unconscious) to improved practice. Three etiologies are briefly explored here: (a) atavism, (b) "is/ought" logical fallacies, and (c) confusion or desperation. Understanding the etiology of psychological resistance may ultimately lead to improved interventions for overcoming researcher resistance to reporting effect sizes, using non-nil null hypotheses, and adopting other analytic improvements.