As PCR approaches its third decade of existence, countless research labs around the world perform tens of thousands of these experiments each year and generate more than $100 million in annual royalty revenues for Roche, which has held the patent estate to the technology since 1991.
Yet with the increasing reliance on PCR, and especially fluorescence-based real-time PCR for quantitative gene expression, many scientists complain that a dearth of standardized protocols, together with a good dose of human error, has conspired to undermine the accuracy of these experiments.
“A lot of the results published [that use RT-PCR for quantitative gene expression] are meaningless,” said Stephen Bustin, a researcher in molecular medicine at Barts and the London Queen Mary’s School of Medicine and Dentistry. “There is no standardization. This is a problem.”
However, absent a solution, this problem may be a boon for independent high-throughput genotyping-service labs: Researchers, fed up with inconsistent and untrustworthy results, might eventually outsource their PCR experiments to large academic or private-sector providers, whose standardized methodologies offer greater consistency and better cost-efficiency.
To be sure, researchers in many life-science disciplines inadvertently contribute to the problem by claiming their methods are the gold standard. But the phenomenon — and its eventual impact on human health — is magnified in genomics, and especially in gene-expression profiling and SNP-genotyping, because of the proximity to clinical outcomes and the race for greater throughput.
“There’s no real benchmarking or objective evaluation of the pros and cons of the methods,” said Tony Brookes, vice chairman of the clinical genomics unit at the Karolinska Institute’s Center for Genomics and Bioinformatics, in Sweden.
For example, despite the volume of RT-PCR experiments performed each year, few researchers actually control the assays from the start, said Bustin. “You have to control how you prepare it, store it, how you handle your RNA, and how you quality-control it,” he told SNPtech Reporter. “What you find is that most people pay very little attention to that.”
In fact, results of a recent study Bustin performed showed that only roughly 30 percent of researchers conducting RT-PCR experiments have quality-control guidelines that focus on RNA handling — a problem that becomes significant when one considers the diversity of RNA molecules among various organisms. For example, some RNA molecules found in the liver have half-lives measured in seconds. One researcher might, within five seconds, sacrifice the animal, locate and extract its liver, collect some tissue, and freeze it; the same procedure might take another researcher two minutes.
“Obviously, if your RNA is no good then your results will be no good,” Bustin said. “I think any experiment that records expression profiling … needs to have some indication as to what the quality of the RNA was.”
Experts say this issue has surfaced because scientists today have begun focusing on quantitative-data analysis as opposed to using PCR merely to prove the existence of a single mutation. “We’re not satisfied with that anymore,” argued Bustin. “And because you don’t allow for quality control at the very beginning — because there is so much variability in how you carry out your experiment — it becomes very difficult to come to a universal conclusion” about various data endpoints. “There are huge high-positive rates, false-positive rates, false-negative rates, and huge error rates.
“There’s so many enzymes, so many different reverse transcriptases, so many polymerases,” Bustin went on. “Some people optimize their reactions; some people don’t optimize their reactions; people use different chemistries, so the way people get their results is quite markedly different between different people doing these experiments.”
Furthermore, many researchers using RT-PCR don’t want to run standard curves, and instead “are trying to use relative quantifications” normalized against housekeeping genes. “To my mind this is a no-no,” said Bustin. “You can’t rely on housekeeping genes to normalize your data, so you have to find something else. And this is where the problem arises: there’s nothing that you can put your finger on and say, ‘Well, this is the best possible way of doing it.’ Different people propose different things.”
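The relative quantification Bustin is describing is commonly done with a ΔΔCt-style calculation, and a small sketch shows why an unstable reference gene is dangerous. The gene names, Ct values, and function below are invented for illustration; the arithmetic simply assumes equal amplification efficiency for target and reference.

```python
# Hypothetical illustration (invented Ct values) of relative
# quantification against a reference gene, the practice Bustin
# cautions against when the "housekeeping" gene is not stable.

def fold_change(target_ct_sample, ref_ct_sample,
                target_ct_control, ref_ct_control,
                efficiency=2.0):
    """Relative expression via a delta-delta-Ct calculation.

    Assumes the same amplification efficiency for target and
    reference; a perfect doubling per cycle gives efficiency = 2.0.
    """
    delta_sample = target_ct_sample - ref_ct_sample
    delta_control = target_ct_control - ref_ct_control
    return efficiency ** -(delta_sample - delta_control)

# With a stable reference gene, the target appears 4-fold up-regulated.
stable = fold_change(24.0, 18.0, 26.0, 18.0)    # → 4.0

# If the reference gene itself drifts by one cycle in the sample,
# the apparent fold change doubles with no change in the target.
unstable = fold_change(24.0, 19.0, 26.0, 18.0)  # → 8.0

print(stable, unstable)
```

Because every cycle of drift in the reference gene changes the reported fold change by a factor of the efficiency, small instabilities in a housekeeping gene translate directly into large quantification errors.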
Said Karolinska’s Brookes: “We’re dealing with really difficult technologies here where you’re trying to determine in some cases the difference between one and two molecules, or 10 and 20 molecules. It’s going to be impossible to have a method that’s perfect at that.”
He added: “I know that some of the SNP-genotyping methods out there have, in some experiments, error rates of 4 or 5 percent. And other times those same methods can be 99.9 percent accurate. It depends on the SNP, it depends on the worker, it depends on the quality of the input DNA. We have experience of working with others where we’ve seen that error rate, and we’ve done experiments to prove that they were wrong in that many cases.” Brookes stressed that these kinds of errors are often missed.
The research community has cautiously begun to address this problem. “Make sure your RNA or template quality is good,” advised Bustin. Also, “take the human element out of the equation and automate as much as you can,” he said. “I think that people have begun to wake up to it, but they are wary, and I think there is a lot of reluctance and resistance to changing your ways. There’s a lot at stake, obviously. But I think it’s getting better.”
Labs may also create standard procedures for comparing research internally, or agree to programs that compare their work against that of other labs. Brookes’ facility in Sweden is an example of the latter: He has instructed his staff to duplicate 5 percent of the plates that arrive at the lab and send them off to an outside lab, which performs the same experiments. These steps are mirrored at the other lab. “It adds 5 percent to our workload, but if it’s all done in a random way I’ll soon discover where I’m going wrong or where he’s going wrong,” said Brookes. In fact, he said, this is “exactly analogous” to the way in which the human genome was sequenced, in which the myriad sequencing labs of the Human Genome Project repeated some of each other’s work to ensure they achieved the same result.
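The duplicate-plate scheme amounts to a concordance check: re-genotype a random 5 percent of samples elsewhere and measure how often the calls disagree. A minimal sketch, with invented data, genotype calls, and error model (the 4 percent per-call error rate echoes the figures Brookes cites), might look like this:

```python
# Sketch of a duplicate-plate concordance check; all data simulated.
import random

def discordance_rate(calls_a, calls_b):
    """Fraction of positions where two labs' genotype calls differ."""
    assert len(calls_a) == len(calls_b)
    mismatches = sum(a != b for a, b in zip(calls_a, calls_b))
    return mismatches / len(calls_a)

random.seed(0)
lab_a = [random.choice(["AA", "AG", "GG"]) for _ in range(1000)]

# Pick a random 5 percent of samples to send to the second lab.
subset = random.sample(range(len(lab_a)), k=len(lab_a) // 20)

# Simulate the second lab's calls with a 4 percent per-call error rate.
lab_b = [lab_a[i] if random.random() > 0.04 else "AA" for i in subset]

rate = discordance_rate([lab_a[i] for i in subset], lab_b)
print(f"observed discordance on duplicated samples: {rate:.1%}")
```

Randomizing which samples are duplicated is the key design choice Brookes mentions: it keeps either lab from knowing which calls will be double-checked, so the observed discordance estimates the true error rate across the whole workload.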
Absent such steps, some scientists suggest that the community may begin to outsource genotyping and gene-expression experiments to the big service shops. Researchers like Brookes welcome this.
“I think these things will come to happen, and I think it makes more sense that SNP genotyping is done by properly set-up, professional, commercial groups who have all of the standard operating procedures and quality-validation controls in place. This actually makes more sense, and it’s actually cheaper for most academic users to outsource their genotyping to these groups. This would then bring a lot of standardization in.”
According to Stephen Chanock, director of the core genotyping facility at the National Cancer Institute’s Center for Cancer Research, the “standardization question will take quite a while” to answer. He also said he doubts that a “true standard” will ever emerge.
Asked if he thought the majority of researchers might agree on one standard, Chanock said: “That would be wonderful, but the likelihood of people agreeing on that is … a couple of years off.”