Not So Much With the Agreeing

Peer reviewers rarely agree in their ratings of research proposals, according to a new study appearing this week in the Proceedings of the National Academy of Sciences.

Researchers from the University of Wisconsin–Madison examined how 43 reviewers rated and critiqued the same group of 25 grant applications that had been submitted to the US National Institutes of Health, in a simulation of the agency's peer-review process. They report that although all the reviewers in the simulation received the same instructions on how to evaluate the applications, there was no agreement among them on the quality of the proposals or on how to translate their assessments into numerical scores.

This suggested to the researchers, led by Madison's Molly Carnes, that peer reviewers cannot reliably distinguish between grant proposals that are merely good and those that are excellent. She and her colleagues note that the outcome of a review depended more on which reviewers were assigned to the grant than on the grant itself.

In a statement, Carnes says the work addresses the question: "How can we improve the way that grants are reviewed so there is less subjectivity in the ultimate funding of science?"

"We need more research in this area and the NIH is investing money investigating this process," she adds.