November 2009: Is Peer Review Broken?

It is a fundamental irony of science that a field built by and for people who dedicate their lives to eschewing opinion in favor of measurable, objective standards relies almost entirely on a subjective system as its gauge of success: peer review. Scientists' career milestones — from publishing that pivotal paper to winning that crucial grant — hinge on human opinion.

"For something that is of and for scientists, the peer review process is very unscientific," says Ferric Fang, a professor of laboratory medicine and microbiology at the University of Washington. Whether it's for papers or grants, having just a handful of people review someone's work is statistically unsound, he adds. "If these [reviews] were data that you generated in your lab, you would say, 'I don't know what the conclusion of this is.'"

It's no secret that most scientists dislike the current peer review system — for a number of reasons, from how much time it takes to the seeming capriciousness of review decisions. And plenty of people believe the system could be improved. "If you asked someone today to design from scratch a [peer review] system, they would not design it this way," says Jonathan Eisen, a professor at the University of California, Davis, genome center and academic editor-in-chief at PLoS Biology.

But the idea that peer review could somehow be as rigorously scientific as the science it's attempting to judge, while appealing, seems to be terribly unrealistic. "Peer review is necessarily imperfect," according to Hemai Parthasarathy, a vice president with Feinstein Kean Healthcare and former editor at Nature and PLoS Biology. No matter how much the system is tweaked or even overhauled, human opinion will still be a factor.

Still, the shortcomings of the approach, both for scientific literature and for grant funding, are increasingly coming into the spotlight. Tighter budgets have made grant peer review more selective than ever, while the rise of interdisciplinary science defies the concept of one or two expert reviewers that serves as the bedrock for the journal peer review process. "There is a need for a cultural revolution," says SK Dey, a professor of pediatrics, cancer, and cell biology at Cincinnati Children's Hospital Medical Center.

In the pages that follow, we'll look into the peer review processes for grants and for papers, both of which have seen improvements and experimentation in recent years. While it may seem unwieldy to lump them together, concerns about the basic concept of peer review tend to underlie both systems, and it may be informative to consider them jointly.

Grant review

There was nothing quite like the US stimulus fund program to put the grant peer review process in the hot seat. For the National Institutes of Health, the timing probably couldn't have been worse: the agency was in the throes of a series of changes to the review program set in motion by Elias Zerhouni before he stepped down as director. In the midst of that, NIH received somewhere in the neighborhood of 30,000 grant applications for its bolus of new funding, which meant finding 20,000 extra reviewers and training them in the new process for a review that had to take place in record time. As a stress test of the system, you really couldn't ask for more.

At the center of the maelstrom was Toni Scarpa, director of NIH's Center for Scientific Review. Looking back on it all, Scarpa says the system passed with flying colors. Changes to the review process — which included a new scoring scheme, bulleted lists of review points, a different order for discussion, and more — worked "incredibly well," he says. He also notes that finding 20,000 extra reviewers in such a short timeframe "has been probably one of the best examples of responsiveness of the scientific community."

But peer review skeptics say that the agency's ability to find such an extraordinary number of additional reviewers only highlights the problems in the system — one of which is that, as scientists get busier and more overcommitted, "NIH has to dig lower and lower in their reviewer pool," says Washington's Fang. "More and more people are too busy writing grants to review them. … The spiral is really to make peer review of lower quality."

Fang, like other scientists interviewed for this article, says he didn't find the NIH review changes to be very substantive. "Dividing up the peer review process into all of these different categories and expanding the scale is actually only causing a lot of confusion in study section," he says. "It creates a charade of having a different sort of review process, but I don't really think it's any different." Fang is, however, looking forward to NIH's new, shorter application, a much-requested change that will be effective for all grant proposals submitted after January 1.

A general complaint about grant review is the need for more and higher-quality reviewers. "I see the same reviewers going from one study section to another study section," says Cincinnati's Dey, who thinks that having fresh voices would improve the quality of peer review. He's a proponent of using videoconferencing and other electronic means of peer review to reduce the overall time commitment and thereby encourage people to participate. "Then you will be able to include [more] people because they don't have to travel," he says.

Scarpa notes that electronic review now accounts for about 15 percent to 20 percent of the proposals considered. "We found it has become quite popular," he says, adding that NIH isn't "pushing one or the other" but is trying to accommodate the needs of as many potential reviewers as possible with the alternative. He also says that a new system due to come online soon may help the process of finding reviewers: while all applications are currently assigned manually to reviewers, NIH has built a program that can mine a database of millions of grant applications to match new submissions to people who reviewed similar proposals in the past. That should help the institute "do a much better and thorough job in assigning reviewers," Scarpa says.
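
NIH hasn't detailed the internals of that matching system, but the general idea (score the text similarity between a new application and a corpus of past ones, then surface the reviewers attached to the closest matches) can be sketched with standard tools. Below is a minimal illustration, assuming TF-IDF features and cosine similarity; those are our choices for the sketch, not a description of NIH's actual program:

```python
# Hypothetical sketch of similarity-based reviewer matching.
# past_apps maps each reviewer to the abstract of a proposal they
# previously reviewed; new_app is the incoming submission.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_apps = {
    "Reviewer A": "host-pathogen interactions during Salmonella infection",
    "Reviewer B": "comparative genomics of soil microbial communities",
    "Reviewer C": "signaling pathways governing embryo implantation",
}
new_app = "genomic analysis of Salmonella virulence factors in host cells"

names = list(past_apps)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(past_apps.values()) + [new_app])

# Cosine similarity of the new application against every past one;
# the highest-scoring reviewers are the suggested assignments.
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(names, sims), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

A production system would work at vastly larger scale and would also need to handle conflicts of interest and reviewer workload, but the core ranking step can be this simple.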

One hope is that having a larger pool of reviewers could help reduce the impact of any individual review, says Fang. Under the current system, "one bad review can sink an application."

Another take on the grant review system in general is that focus needs to shift away from today's model of specific proposals for short-term periods. "Three years is ridiculously short for a scientific project," says Peter Lawrence at the zoology department of the University of Cambridge. Because of that short time span, "people have to have several overlapping grants in order to function," he says.

Lawrence would prefer a system where reviewers considered the track record of the investigator more than the details of the new research proposal (with special dispensation for new investigators). He says that current applications, where scientists are asked to provide very specific accounts of the work they'll do, are unrealistic — since research will undoubtedly evolve as it goes — and therefore result in detailed but somewhat unlikely accounts. He likes the idea of a process where researchers present work they've accomplished in the past along with a very short description of what they want to do. "The present system isn't working," he says. "When you think about it, it's a huge waste of time and effort."

According to Fang, this concept of awarding funds on a track record basis would also serve the purpose of weeding out people who are very skilled at writing proposals but are less competent at actually performing the science. And a system that acknowledges the necessary evolution of research would be much better than what exists today, he says. "I've had reviewers come back and say, 'You didn't do exactly what you said you were going to do five years ago' in a critical way," Fang adds. Making sure the focus is on getting funds to a really capable scientist would mean that evolving research is no longer a detriment to the system.

In general, Fang believes the grant review process is in need of a complete change, rather than minor adjustments. No company would be successful "if over half [its scientists'] time was spent trying to justify getting funded, and then you would only let one out of every 10 do any work," he says. "Yet the country runs its R&D department that way."

Journal review

Whether you're a veteran scientist with dozens of published papers under your belt or a novice with just one or two publications to your name, chances are you've experienced the feeling that the peer review process for scientific literature leaves something to be desired. The list of improvements scientists wish for is lengthy: more comprehensive reviews, reviewers more familiar with the science of the paper at hand, or simply getting a paper past an editor and out for review in the first place.

From an editor's perspective, the challenges to peer review are the result of a vicious circle: scientists feel more and more pressure to publish; they might split up a project into a few papers to give authors more credit or to have more chances at publication; and they'll likely wind up submitting those papers to more than one journal. "Really, the peer review system is overloaded," says Hemai Parthasarathy, drawing on her experience at Nature and PLoS Biology. With resubmissions, "you could end up with a dozen reviewers that have looked at a paper before it's published," she adds.

With increasing paper submissions and a finite pool of reviewers, the challenge of maintaining high standards for the review process gets tougher. It also means there are fewer people to look at each paper — something that UC Davis' Jonathan Eisen finds problematic. "Having two reviewers or three reviewers … and one editor be the gatekeepers for scientific knowledge is a mistake," he says. "It just seems like it has too much potential for limiting the spread of scientific knowledge."

Peter Lawrence at the University of Cambridge says that problem is magnified at journals with very high rejection rates, such as Nature or Science, where the vast majority of submitted papers are rejected without review. "The people who actually make the vital decisions" — that is, rejecting papers before they get to scientific reviewers — "are editors, who themselves have little experience in research," Lawrence says. "It would be better if those decisions were made by experienced scientists."

To be sure, journal editors do good work, and they're by and large highly regarded in the scientific community. But even setting aside the question of whether they should serve as gatekeepers of the literature, plenty of concerns about the journal review process center on the scientific reviewers themselves.

Key among those is the challenge of large-scale biology. Christopher Lee, a professor of chemistry and biochemistry at the University of California, Los Angeles, says that as papers represent more complex research, the chances of finding a truly expert reviewer dwindle. "As soon as you take a reviewer out of the domain of their expertise where they feel basically a comfort zone, the standard of peer review changes from the gold standard we implicitly understand — 'is this a significant improvement over existing literature?' — [to] 'is there anything here that I don't feel comfortable with?'" Lee says. "Discomfort is fatal. … That's enough to stop a paper."

One approach to solving that problem is to separate the evaluation of a paper's technical merit from the evaluation of its impact. Lee says this could be accomplished in a number of ways — for example, using Internet-based methods to assess impact, such as posting the paper's headline and checking how many people click on it, while reviewers remain responsible for assessing the paper's validity.

An experiment along these lines is already under way at PLoS One, where the review process centers entirely on technical merit. A technically sound paper is published, and it's then left to the community to decide whether the paper is actually of interest. That also gets at the root of the problem of having just a couple of reviewers decide the fate of a paper, according to Eisen. "I think that one or two people assessing the novelty or importance of a paper is very, very hard. But I do think that one or two people are [able to] assess the technical merits of a paper," Eisen says. "Then the community can determine if something is important or interesting or novel." He adds that his lab has "totally bought into" the PLoS One concept and "we're submitting a huge fraction of our papers" to the publication.

Another problem often cited with journal review is that of anonymity. Is it fair that reviewers know who the authors are but remain anonymous themselves? Journal editors believe that the anonymity rule encourages more candid reviews, but some scientists contend that the practice leaves the door open to reviewers acting with political motives. "Anonymous peer review has many benefits, like junior scientists who are more comfortable expressing their opinions," Eisen says. "What comes with anonymity is the potential for malfeasance." While there's no real data on the topic, scientists in the field have no shortage of anecdotal evidence of papers being scuttled by vengeful reviewers or blatantly false comments in reviews.

Eisen, for one, believes that open peer review — where reviewers are identified — would prevent a lot of the problems of the anonymous system. Another possibility is double-blind peer review, where neither the authors nor the reviewers are identified. But there are challenges with both. Parthasarathy says the problem with an open system is that "what happens practically is you can't get enough reviewers to review for you." Particularly for papers whose authors are known to be especially influential or vindictive, reviewers can be hard to come by even under the current anonymity rules, and that problem only multiplies if reviewers' names are made known, she adds. The issue with the double-blind system is that "it's remarkably difficult for an author to remain anonymous," Parthasarathy says. If editors are doing their jobs and getting the right reviewers, those scientists have likely already seen the authors present the work at conferences and on posters, she notes.

What everyone can agree on is that trying new approaches with the journal review system can only help. "Experimenting with some of these alternatives is going to give us more options," Eisen says. What he would most like to see is some kind of process where peer review is open enough that scientists "could analyze it the way we analyze papers and really [evaluate] the peer review system."

Themes

There are a number of themes that cut across journal and grant peer review. For instance, the rise of interdisciplinary science makes it less and less likely that any one or two people can have expertise in all aspects of a research proposal or a submitted paper. "Finding the right people for the right kind of grant is becoming very difficult," says Dey at Cincinnati.

UCLA's Lee says this should be an incentive to increase communication in aspects of peer review so that reviewers can ask applicants questions before having to weigh in on a grant or publication. "The problem that you get in traditional peer review is that people are forced to shoot first and ask questions later," he says. The system should "recognize from the beginning that I as a reviewer am not an expert in all aspects" of the paper or grant, he adds.

Another theme that emerges is the weight many scientists would put on track record rather than on any individual submission. In grant review, people like Peter Lawrence believe this would help ensure more accurate allocation of precious research dollars. To some extent, the same holds true in journal review, says Parthasarathy. A paper is more likely to be respected if the author "has a track record in the field of being able to do the experiments," she says. "We want to say that it doesn't matter who did the work, that science should speak for itself. … But at the end of the day, there is a certain level of trust that goes into your reading of science — especially where so much of it depends on the competence of the experimenter."

Of course, the challenges of any system that relies on track record are obvious: it raises barriers for new investigators, and a career could be ruined by a single wrong turn in one's research. That's why, in the current system, "the majority of the programs are based on projects," says CSR's Scarpa. "Our duty and job is to identify the best research."

Ultimately, according to Fang at Washington, the real problem may be expecting too much from peer review in the first place. "I don't think peer review can ever be more than a general arbiter of quality or poor-quality science," he says. "I don't think that it can ever be so discriminatory that it can take the top 5 or 10 percent" of scientific proposals or papers and separate them from the rest. Fang says that peer review is probably quite good at selecting, say, the top 20 percent of research grants or publications. When reviewers are asked to be much more selective than that, "what they're trying to do is really impossible," he says. Current funding agency paylines and ultra-selective journals put reviewers in the untenable position of trying to guess which proposals or papers are at the top few percent of the stack. "The process has become really distorted," he says.
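
That intuition, that noisy reviews can find the top 20 percent but not the top few percent, is easy to probe with a toy simulation (ours, with invented noise levels, not anything Fang presented). Give each proposal a true quality, add reviewer noise to produce a score, and check how many score-selected proposals genuinely belong in the chosen slice:

```python
# Toy model: noisy review scores versus true quality at two paylines.
import random

random.seed(0)
N, TRIALS = 100, 2000   # proposals per round, simulated rounds

def hit_rate(top_frac, noise=1.0):
    """Fraction of score-selected proposals truly in the top fraction."""
    k = int(N * top_frac)
    hits = 0
    for _ in range(TRIALS):
        quality = [random.gauss(0, 1) for _ in range(N)]
        # Reviewer score = true quality + noise of comparable magnitude.
        score = [q + random.gauss(0, noise) for q in quality]
        true_top = set(sorted(range(N), key=lambda i: -quality[i])[:k])
        picked = sorted(range(N), key=lambda i: -score[i])[:k]
        hits += sum(i in true_top for i in picked)
    return hits / (TRIALS * k)

print(f"top 20% payline: {hit_rate(0.20):.0%} of picks truly qualify")
print(f"top  5% payline: {hit_rate(0.05):.0%} of picks truly qualify")
```

When reviewer noise is comparable to the real spread in quality, the 20 percent cut identifies genuinely top-tier work noticeably more reliably than the 5 percent cut: the thinner the slice, the more the outcome is decided by noise rather than merit.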

"The bottom line is, we should be thinking about how to make scientists as successful and productive as they could be," Fang says. "A system where you spend half your time on fundraising is not a smart system."
