Abstract
In statistical testing of data, the P value is a standard measure for reporting quantitative results. When a significant difference is reported (e.g., P < .05), most readers understand that there is less than a 5% chance that the authors have made a type I error (false positive, or α) with their conclusion. In contrast, when nonsignificant differences between treatments, groups, or parameters of interest are reported (e.g., P > .05), many investigators and readers incorrectly interpret this as a 95% chance that the conclusion of no difference is correct. In fact, the α level of significance (in this example, .05) is only one of the parameters that determines the probability of committing a type II error (false negative, or β) when concluding statistical insignificance. Statistical power is the probability that a test will correctly reject the null hypothesis when a true difference exists; that is, power equals 1 − β. The higher the power, the greater the chance that a nonsignificant result reflects a true absence of difference rather than a failure to detect one. Power depends on the α level of significance, the sample size, the standard deviation of the population or the sample, and the magnitude of the difference the investigators are trying to demonstrate.
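The dependence of power on these four quantities can be illustrated with a simple calculation. The sketch below (not taken from the article; all numbers are hypothetical) uses the standard normal approximation for a two-sided, two-sample comparison of means with equal group sizes:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(delta, sd, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z test.

    delta: true difference in means the investigators wish to detect
    sd:    common standard deviation of the two populations
    n:     sample size per group
    alpha: significance level (the type I error rate)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96 for alpha = .05
    se = sd * sqrt(2 / n)                          # standard error of the difference in means
    z = delta / se                                 # standardized true effect
    # Probability that the observed difference exceeds the critical value
    # in either direction when the true difference is delta.
    return (1 - NormalDist().cdf(z_alpha - z)) + NormalDist().cdf(-z_alpha - z)

# With delta = 0.5 SD and 64 subjects per group at alpha = .05,
# power is roughly 0.80 -- a conventional target.
print(round(power_two_sample(delta=0.5, sd=1.0, n=64), 2))
```

Varying the inputs shows the relationships stated above: power rises with a larger sample size, a larger true difference, or a more lenient α, and falls as the standard deviation grows.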