Bloggers are reacting to an article published in The Chronicle of Higher Education suggesting that an "avalanche" of low-quality papers is being published, that these papers add too much noise to research, and that too many of them contain very little useful information. "Instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed," the article's authors say. Reviewing these papers takes time and effort, and reading them wastes the time of researchers who find nothing of value in them, they add. Limiting the number and length of papers could mitigate this problem, as would making more use of citation counts and journal impact factors, the authors say, adding, "What we surely need is a change in the academic culture that has given rise to the oversupply of journals."
Derek Lowe at In the Pipeline agrees somewhat with the Chronicle article, but argues that scientific publishers must share some of the blame with academia. Publishers launch new journals as quickly as they can to meet market demand, and many of these journals turn out to be "losers," he says. As for the fixes the article's authors suggest, Lowe says they are fine ideas but are more or less already in place, and they don't get at the real problem. "I suppose what bothers me is the number of people who aren't working up to their potential," he says. "Too many academic groups seem to me to work on problems that are beneath them." The problem, he suggests, isn't so much that people are writing mediocre papers as that funding constraints force them to work on mediocre problems.
Mike the Mad Biologist completely disagrees with the suggestions put forth by the "ridiculous" Chronicle article, but concurs with Lowe's opinion that funding is a key problem. Limiting the number of papers would only hinder researchers who, for better or worse, must demonstrate their productivity to win research funding, he says. And impact factor is a poor way to judge an article's quality: some results need to reach only a small audience, which doesn't make them low-quality. Yet under the Chronicle authors' scheme, those results would be deemed a low publishing priority, Mike says. "While they are right that there are serious structural problems in science, such as the overproduction of PhDs, this isn't due to publishing, but funding structures," he adds.