Research findings that fail to replicate are more likely to be cited than reproducible work, possibly because such findings are splashier, the Guardian reports.
Concerns about reproducibility have been around for years. Researchers at Amgen announced that they were unable to reproduce the findings of 47 of the 53 landmark cancer papers they tried to replicate, and in 2016 published data on three of those attempts. The year before, the Open Science Collaboration reported that it could reproduce only about half of the 100 major psychology studies it attempted to replicate.
In a new analysis appearing in Science Advances, a University of California, San Diego-led team examined the effort by the Open Science Collaboration as well as two other replication efforts in the social sciences to see whether those failed replication attempts had any influence on the papers' citations. They found that papers whose results could not be reproduced were more likely to be cited, garnering about 153 more citations during the study period than replicable ones, and that most of those subsequent citations did not mention the failed replication.
"We presume that science is self-correcting," the University of Virginia's Brian Nosek, who led the Open Science Collaboration, tells the Guardian, adding that "if more replicable findings are less likely to be cited, it could suggest that science isn't just failing to self-correct; it might be going in the wrong direction."