NEW YORK (GenomeWeb News) – The effect sizes reported for many published biomarkers exceed those found when these associations are scrutinized in larger analyses, according to a review in today's issue of the Journal of the American Medical Association.
A pair of researchers from Stanford University and the University of Ioannina in Greece compared risk associations and effect sizes for dozens of biomarkers that had been assessed in one or more frequently cited studies and in at least one meta-analysis. In more than 80 percent of cases, the researchers saw biomarker effect sizes that were larger in the oft-cited studies than in the meta-analyses, with several of the associations failing to reach statistical significance in the largest studies available.
"These highly cited studies — although there are exceptions — on average, they tend to have exaggerated results," corresponding author John Ioannidis, director of the Stanford University School of Medicine's Stanford Prevention Research Center, told GenomeWeb Daily News.
Ioannidis and co-author Orestis Panagiotou, a clinical and molecular epidemiology researcher at the University of Ioannina School of Medicine, looked at biomarkers reported in 35 studies published in two dozen of the most widely cited journals between 1991 and 2006.
The papers represented roughly the top three percent of papers published in these major journals, Ioannidis said. Each had been cited 400 times or more and included associations that were also assessed in at least one meta-analysis.
For their own analyses, the duo compared the relative risk estimates reported for each biomarker in the highly cited study and in the corresponding meta-analysis, looking also at findings from the largest individual study included in that meta-analysis.
"We tried to see whether the most prominent papers in the literature of biomarkers provide accurate or exaggerated effects," Ioannidis explained. "Having accurate information about these measurements and their impacts is important for both scientific reasons and for clinical practice and public health — and obviously also for cost."
Overall, the researchers found that effect sizes tended to be larger in the highly cited studies than in the largest available study of that biomarker in 86 percent of the cases examined. Similarly, some 83 percent of associations had smaller effect sizes in the meta-analyses than in the highly cited studies.
"It was interesting that there was such a consistent pattern," Ioannidis said. "It probably suggests that this is not something that happens just for a single biomarker. It's probably a phenomenon that applies to many different biomarkers and different diseases and different settings."
Fewer than half of the associations — just 15 of the 35 examined — reached statistical significance in the largest studies available, the pair noted. Of these, seven associations had relative risks of 1.37 or higher in the largest available study, corresponding to a risk at least 37 percent higher in marker-positive individuals than in the comparison group.
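To make the arithmetic behind that figure concrete, a relative risk is simply the event rate in the marker-positive group divided by the rate in the marker-negative group, so an RR of 1.37 translates to a 37 percent excess risk. The sketch below illustrates the calculation with hypothetical counts — these numbers are not data from the JAMA review:

```python
# Illustrative only: computing a relative risk (RR) from a 2x2 table.
# All counts below are hypothetical, not taken from the study discussed above.

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk in the marker-positive group divided by risk in the marker-negative group."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 137 events among 1,000 marker-positive people,
# 100 events among 1,000 marker-negative people.
rr = relative_risk(137, 1000, 100, 1000)
print(f"RR = {rr:.2f}")                  # RR = 1.37
print(f"Excess risk = {(rr - 1):.0%}")   # Excess risk = 37%
```

Note that an RR compares two defined groups; it is not a comparison against the population at large, which mixes both groups together.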
Meanwhile, when they compared highly cited studies directly to meta-analyses, the researchers found that 32 of the 35 associations reached statistical significance in meta-analyses.
"The results of the highly cited studies were often in stark contrast against both the largest study on the same association and the corresponding meta-analysis," the authors wrote, noting that "[m]eta-analyses of risk factors may have more inflated effects themselves, because typically they include also the highly cited studies and they may suffer from publication and other selective reporting biases."
While they acknowledged that their own study design has limitations and that the findings do not necessarily extend to all biomarkers, the researchers argue that the results could help explain why many seemingly promising biomarkers never graduate to use in the clinic.
And though they don't dispute the notion that some of the biomarkers included in the study are important and authentic, Ioannidis and Panagiotou say that, in general, rigorous study design and analysis standards, coupled with ongoing analyses of biomarkers, are needed to verify biomarker associations and to accurately chronicle effect sizes.
"In order to get more reliable estimates, we need larger studies and we also need coalitions of investigators working in large consortia," Ioannidis said, "where the primary emphasis would not be to discover one biomarker, but to replicate it and find that it has consistent results … so we can get an appreciation of really how big the estimate of the effect is in reality."
For his part, Ioannidis said he hopes researchers in the biomarker field see the paper "as an invitation to enhance and improve the proportion of biomarkers that make it to become something useful."
In an accompanying editorial appearing in the same issue of JAMA, University of Amsterdam clinical epidemiology, biostatistics, and bioinformatics researcher Patrick Bossuyt argued that while the review should not dissuade researchers from pursuing biomarker research, it should raise caution about over-selling results from individual studies.
"It would be premature to doubt all scientific efforts at marker discovery and unwise to discount all future biomarker evaluation studies," Bossuyt wrote.
"However," he added, "the analysis … should convince clinicians and researchers to be careful to match personal hope with professional skepticism, to apply critical appraisal of study design and close scrutiny of findings where indicated, and to be aware of the findings of well-conducted systematic reviews and meta-analyses when evaluating the evidence on biomarkers."