… And Statistics

In a statement appearing in The American Statistician, statisticians warn that p-values are being misused and misinterpreted.

To help clarify how p-values should be used, the American Statistical Association lays out a number of guiding principles. First off, the group writes that p-values can indicate how compatible the data are with a specified statistical model. They do not, however, measure the probability that a hypothesis is true.

Retraction Watch's Alison McCook asks Ron Wasserstein, ASA's executive director, about that last point. She notes that p-values are often used "to estimate the probability the data were produced by random chance alone."

But that, Wasserstein tells her, is not what they do. He offers a hypothetical test of two treatments in five matched pairs of patients as an example. In this case, the null hypothesis would be that the new and the old treatment each have a 50-50 chance of leading to a better outcome in each pair of patients.

"If that's true, the probability the new treatment will win for all five pairs is (½)5 = 1/32, or about 0.03. If the data show that the new treatment does produce a better outcome for all 5 pairs, the p-value is 0.03," he tells her. "It represents the probability of that result, under the assumption that the new and old treatments are equally likely to win. It is not the probability the new treatment and the old treatment are equally likely to win."

The ASA principles also say that scientific conclusions shouldn't be based solely on whether a p-value passes a certain threshold, such as p < 0.05. Instead, the group says that study design, the quality of the measurements, and the assumptions underlying the analysis should also be considered.

"No single index should substitute for scientific reasoning," the ASA adds.