Many researchers bemoan the use of journal impact factors as a means of assessing the influence of scientific articles, Nature's Mollie Bloudoff-Indelicato writes. In response to this, the US National Institutes of Health has developed a new metric, dubbed the Relative Citation Ratio, but this approach, too, has drawn criticism, Bloudoff-Indelicato adds.
In a paper posted at bioRxiv, an NIH team led by George Santangelo describes the RCR as an "article-level and field-independent" way to quantify scientific accomplishment. An article's RCR is calculated by dividing its citation rate by the average citation rate of articles in the field. The RCR is then compared to a benchmark set of NIH-funded papers.
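As a rough illustration of the arithmetic described here, the sketch below follows the simplified description above rather than the full method in the bioRxiv paper; the function names, benchmark values, and example numbers are hypothetical.

```python
# Illustrative sketch of the RCR idea as summarized above -- not the NIH's
# actual implementation. All helper names and numbers are hypothetical.

def relative_citation_ratio(article_citations_per_year, field_citation_rate):
    """Divide an article's citation rate by the average citation rate of its field."""
    return article_citations_per_year / field_citation_rate

def benchmark_percentile(rcr, nih_benchmark_rcrs):
    """Place an RCR relative to a benchmark set of NIH-funded papers."""
    below = sum(1 for b in nih_benchmark_rcrs if b <= rcr)
    return 100.0 * below / len(nih_benchmark_rcrs)

# Hypothetical example: an article cited 12 times per year in a field
# whose articles average 6 citations per year.
rcr = relative_citation_ratio(12.0, 6.0)                      # -> 2.0
pct = benchmark_percentile(rcr, [0.5, 1.0, 1.5, 2.5, 3.0])    # -> 60.0
print(rcr, pct)
```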
The team applied the metric to nearly 89,000 articles published between 2003 and 2010 and found that the resulting values aligned with the judgments of subject-matter experts.
According to Nature, Stefano Bertuzzi of the American Society for Cell Biology calls the new metric "stunning" in a blog post, but Ludo Waltman of Leiden University says in his own post that it "doesn't live up to expectations." Waltman adds that the metric's complexity and lack of transparency will likely be an impediment to its wider adoption.
"We don't suggest [the RCR] is the final answer to measuring," Santangelo adds. "This is just a tool. No one metric is ever going to be used in isolation by the NIH."