Ecologists are churning out more and more statistically complex studies that somehow are managing to explain less and less, according to a new analysis of over 18,000 articles published over eight decades.
Published in the Ecological Society of America's journal Frontiers in Ecology and the Environment, the study by McGill University researchers suggested that the trend could be due to changes in the way ecologists are tackling their research or in how they are reporting their findings.
Nailing down specific causes for the increasing complexity and decreasing marginal explanatory power will probably require a critical review of ecologists' processes, including their research design and dissemination methods, the authors say.
The idea behind the new analysis stemmed from a lab retreat of McGill grad students, who said they were frustrated at being consistently asked to provide more P values, figures that express how likely it is that a result is due to chance, when they submit papers, according to Science's Erik Stokstad.
"Our supervisors said, ‘It wasn’t always like this,’" explains Etienne Low-Décarie, lead author on the study.
To find out more about the perceived trend, the researchers downloaded 18,076 articles dating back to 1913 from three ecology journals, then used a computer program to search the papers for two kinds of statistics: P values, and measures of how much variability a given factor can explain (for example, how well the amount of phosphorus in a lake predicts how much algae will grow there). They found the average number of P values is rising, with a typical paper now reporting 10 P values, double the number from the 1980s, Stokstad notes.
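To make the two statistics concrete, here is a minimal sketch, not the study's actual text-mining pipeline, of how a single P value and an explained-variance figure (R²) arise from one factor. The phosphorus and algae numbers below are entirely made up for illustration:

```python
# Illustrative only: hypothetical lake measurements, not data from the study.
from scipy.stats import linregress

phosphorus = [5, 12, 20, 28, 35, 44, 52, 60]      # phosphorus concentration (ug/L)
algae = [2.1, 4.0, 6.8, 8.5, 11.2, 13.9, 16.0, 19.3]  # algal biomass (mg/L)

fit = linregress(phosphorus, algae)
r_squared = fit.rvalue ** 2  # fraction of variability in algae explained by phosphorus

# A small P value says the relationship is unlikely to be chance;
# R^2 says how much of the variation the factor actually accounts for.
print(f"P value = {fit.pvalue:.2g}, R^2 = {r_squared:.3f}")
```

A paper can pile up many small P values while each factor's R² stays modest, which is exactly the pattern the McGill team describes: more significance tests, less variance explained per test.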
Stokstad suggests that the trend could "undermine confidence in ecological research."
"What’s going on? One possibility is that ecologists have picked the low-hanging fruit…," he writes. But it is also possible that standards are slipping because ecologists face more pressure to publish.