
A Positive Bias

Publish or perish: it's a common dilemma in the scientific field. Those who publish receive grant money, the admiration of their peers, and the satisfaction of knowing that their work will be read and remembered. Those who don't publish sooner or later fade into scientific obscurity, without grants or recognition.

It isn't enough just to publish. The work must show something, advance the field in some way, or at least be interesting. Studies that don't prove anything are overlooked, sometimes ignored, and almost always forgotten. The pressure is on to publish, and to publish well — something that often means publishing positive results that prove or support a hypothesis. But are positive results all they are cracked up to be?

The University of Edinburgh's Daniele Fanelli set out to answer that question. He recently conducted a study, published in PLoS One, to determine whether researchers were presenting their results in a biased manner in order to make them seem positive, and therefore more worthy of publication. "In many fields of science, it's been observed that you generally get too many positive results and too few negative results, which is due to a variety of reasons," Fanelli says. "My research is in part about identifying some of the factors that add to this situation."

Fanelli randomly sampled more than 1,300 published papers from across the scientific disciplines and from across the United States, to see how many of them reported positive versus negative results. He found that along the spectrum of science, as he moved from the chemical and physical sciences to the biological sciences to behavioral studies on animals and humans, the number of papers reporting positive results increased. He also looked at the influence that productivity could have on the chances of a researcher reporting a positive or negative result by calculating the frequency of positive results across the country and then comparing that data to the National Science Foundation's data on the productivity of academics in the US. A statistical link emerged, he says. "What you get is that in states in which researchers publish more academic papers with less research funding, they report more positive results," he said. "It would be the kind of thing you'd expect to see if — as people have been concerned for a long time — productivity in research and scientific quality don't always go together."

Bias emerges

The more researchers deal with complex phenomena and use new methodologies with an increased number of variables, the more a researcher's bias — whether conscious or unconscious — takes over, Fanelli says. "If you think about the scientific method, this is largely a system that, over the years, we have built to try and control for all the potential biases that even the most honest and clever researcher has," he adds. "From the very start, data is not guaranteed to be objective. But even more than the data, the way you analyze it, the way you decide which are good results, which are bad results that you shouldn't publish, the way you might select the data or the results, the way you write the article and present your results — all these factors have the potential to influence the outcome that finally appears in the literature."

The irony of this is that negative results aren't inherently bad. They teach us something, Fanelli says, and often are crucial for the advancement of science. So why is there such an inherent bias toward positive results? At the root of it all, Fanelli says, is the fundamental bias human beings have toward positive news. "It's well proven in psychology that all human beings are confirmation-biased. We all have a tendency to look around in the world and see in what happens around us confirmations of what we think is true," he says.

The same idea extends to science and scientists. "In reality, scientists are driven by very clear intuitions or ideas about what they think reality is. And in their research, the truth is, they will come to see confirmations of that reality," Fanelli says. "Then you have a whole system that might actually reinforce this, starting with the fact that other researchers in your field are more likely to read and cite your paper if it reports positive results."

The culture of science

People are more interested in reading a paper that explains a hypothesis or theory, Fanelli says, even though science often progresses when researchers find something they cannot explain. The culture of scientific publishing reinforces this idea, emphasizing that papers worthy of publication are the ones that prove something. "You have many editors who openly say they do that," Fanelli adds. And herein lies the dilemma most researchers face: they are operating within a system that places great emphasis on the number of papers they publish, or the number of citations each paper gets, and also gives priority to those papers that are reporting something positive. "You start having an actual conflict of interest in your profession between the need to be purely objective and detached from the results you have and the need to sell them and to sell yourself to get along with the publications and to get another job," Fanelli says.

It's not that productivity isn't important, however. Research is essentially useless if it's not published and shared with the wider scientific community, especially colleagues in a researcher's given field. The problem is simply the emphasis that is placed on the results. "You're increasingly playing a publication game," Fanelli says. "Scientists accused of falsifying results typically say they were desperate to keep going and that was a factor that increased the temptation to cheat." It's the system itself that needs to change, he suggests.

As research becomes more and more specialized, it's becoming harder and harder to find experts in any given field, Fanelli says. To some extent, it's inevitable that the community will rely more on performance metrics of some kind, like how many papers a researcher has published and what kind of impact that research has made. "It's a matter of efficiency and necessity, but there are obvious concerns that you can't build a fair and reliable system of evaluation based only on these kinds of parameters," Fanelli says.

Changing the system

But changing that system won't be easy either. There have been attempts made to create journals specifically for negative results, Fanelli says, but they didn't have much success — most of them have folded. "If we were able to give more importance to negative results than we actually do, that would be an obvious step forward," he adds. One problem is that not all negative results should be published, and it generally makes sense that positive results are given greater attention. At the same time, there are a lot of negative results that should be published and aren't, Fanelli says. It's not just a matter of the results getting published and ignored — it's a matter of them being shunted aside when they could help improve science in some way.

Another way forward, Fanelli says, is to follow the example of companies that do biomedical research and register their studies before they begin them. That way, there would be more openness about the kinds of research being conducted, and the results would have to be made public — no matter whether they were positive or negative. "That could be another way to place greater emphasis on the fact that people are doing work and less on the types of results they get," Fanelli says.

What is troubling is that it's been suggested that researchers are deliberately compromising their ethics for the sake of getting published, Fanelli says. But even unconscious bias can have a deleterious effect on science. "[Some] researchers don't cheat, but do stuff like drop certain results or not publish some numbers that contradict previous data," he adds. "Some of these things could be considered appropriate and others not. In many cases, they could be doing this in good faith, but are generally fooling themselves."
