One overarching problem facing global scientific research communities today is that too many research projects produce results that don't seem to be reproducible, thereby failing a basic tenet of science. If someone publishes a study saying that alcoholic housecats grossly overestimate their leaping abilities compared to non-alcoholic cats, then one should be able to reproduce the same study and see the same antic results, right? The stakes are far higher, of course, if the results cover the efficacy of a new cancer drug or stem cell treatments, as they often do.
This problem has not gone unnoticed by leadership at the National Institutes of Health.
NIH Director Francis Collins and Principal Deputy Director Lawrence Tabak say in a Nature article that they hear a "growing chorus of concern, from scientists and laypeople" that the system for ensuring that biomedical research results can be reproduced is "failing, and is in need of restructuring."
Collins and Tabak say scientific misconduct is rarely the cause of irreproducibility issues; there is no single cause, but rather "a complex array" of contributing factors.
These factors include poor training in experimental design, an emphasis on "making provocative statements" instead of presenting details, a failure on the part of publications to report on the basic design of experiments, and a "secret sauce" that some scientists use to get their results.
It doesn't help that funding agencies, academic centers, and publishers encourage the overvaluation of research published in top-tier journals. And then there is "the problem of what is not published," they write.
"There are few venues for researchers to publish negative data or papers that point out scientific flaws in previously published work," Collins and Tabak say.
They say NIH is "deeply concerned," and is taking several steps to address the problem, though they admit that much more will be required.
Because training may be part of the problem, NIH is putting together a training module on enhancing reproducibility and transparency in research findings, with a focus on sound experimental design. This module will be incorporated into mandatory training for NIH intramural postdoctoral fellows later this year.
Several NIH institutes and centers also are piloting a checklist intended to make grant review more systematic, reminding reviewers to check for experimental design features like analytical plans, randomization, and blinding. The agency also has launched a pilot study to assess the value of assigning one reviewer per panel the task of evaluating the scientific premise of an application and the key publications on which the project is based. By the fourth quarter of this year, NIH plans to gather feedback from these pilots and decide whether they are worth implementing more broadly.
Data transparency is another area where NIH sees potential for improving reproducibility. The agency has requested applications for a Data Discovery Index that would allow researchers to locate and access unpublished, primary data, and a month ago NIH launched the PubMed Commons forum that enables investigators to share comments on published articles. So far, over 2,000 authors have joined the forum.