The number of times a paper is cited is a common way to gauge that article's impact on its field, writes Andy Tattersall at The Conversation. But it's an imperfect measure.
While Andrew Wakefield's discredited and now-retracted Lancet paper has garnered hundreds of citations, the raw count doesn't reveal that many of the citing works were actually criticizing the paper, Tattersall notes. Other metrics, like the h-index, try to incorporate both quantity and quality, but don't capture the contributions of early-career investigators well.
Altmetrics, Tattersall adds, try to encompass all types of academic output, including article views and downloads as well as social media engagement. He notes, though, that critics say such measures may encourage "populist" rather than "rigorous" science.
He adds that the post-publication peer review model may have merit, but that it also opens the floodgates to trolling and manipulation, particularly if anonymous comments are allowed.
"Many now believe that long-standing metrics of academic research — peer review, citation-counting, impact factor — are reaching breaking point," Tattersall writes. "But we are not yet in a position to place complete trust in the alternatives — altmetrics, open science, and post-publication review. But what is clear is that in order to measure the value of new measures of value, we need to try them out at scale."