TRC's Weis Touts New Methods for Standardizing Expression Analyses

Brenda Weis
Research Coordinator
Toxicogenomics Research Consortium

At a Glance

Name: Brenda Weis

Professional Experience: Research Coordinator, Toxicogenomics Research Consortium, National Institute of Environmental Health Sciences; Deputy Director to the Associate Administrator for Science, Agency for Toxic Substances and Disease Registry, Atlanta; Researcher, US Environmental Protection Agency and Centers for Disease Control.

Education: Tufts University, BS, environmental biology and MSPH in public health/epidemiology.

University of Georgia, College of Veterinary Medicine, PhD, molecular toxicology.


For what has seemed like the entire lifetime of microarray-related research, the user community has been talking about 'standardization,' or the ability of a researcher in Lab A to run an experiment on an array and achieve results that can be replicated by a different researcher at Lab B. While there have been many incremental achievements in moving toward a microarray space where data can be compared across labs and platforms, one of the more promising steps toward standardization came this month with the release of a study by the Toxicogenomics Research Consortium, a large collaborative research program comprising six universities and the Microarray Group of the National Institute of Environmental Health Sciences' National Center for Toxicogenomics.

Entitled 'Standardizing global gene-expression analysis between laboratories and across platforms,' and published in this month's issue of Nature Methods, the study highlights protocols, developed by seven laboratories running standard mouse RNA on 12 selected platforms, that increase the comparability of data from lab to lab and platform to platform.

According to TRC research coordinator Brenda Weis, the key to getting standard, comparable, US Food and Drug Administration-friendly results lies not only in the quality of the platform but also in the resourcefulness of the researcher. To learn more about the study and its potential impact on microarray users, BioArray News interviewed Weis last week.

 

Is this the first major paper to come out of the TRC with regards to microarray standardization?

With regards to standardization it is actually the second. The first paper addressed only a very small subset of our overall dataset, a subset that this paper doesn't cover at all. It was by Qin, Kerr, and members of the Toxicogenomics Research Consortium. It came out in Nucleic Acids Research. And it was really a statistical analysis of a very small subset of our data. So that did come out, about six months ago. But as far as this [specific] collaboration, this is our first paper.

You mentioned in your paper that there is a great diversity in the protocols that are being used by different labs for RNA preparation and labeling. At this stage in the game the technology isn't that new anymore. Why does one lab not know what the other lab is doing, in your opinion, with regards to RNA handling and preparation?

I think there are two reasons. First of all, this field really started exploding in the last five years, and individual labs are manufacturing their own microarrays. Some are purchasing them, but most are spotting their own, and there hasn't been a definitive study in the scientific literature emphasizing the need to have standards in these aspects of labeling, hybridization, and so on. So folks are manufacturing their own arrays and using their own in-house procedures, because I don't think anyone really knew how these factors would impact the overall data quality across labs. There just wasn't anything out there to point the finger and say, 'this is really important.' I think intuitively we knew that using different protocols would produce different results. I don't think anyone knew how provocative the findings would be, and I think now [we agree that] if we want to design consortia or bring this to the clinic, we are certainly going to have to address the issue of standards.

This issue comes up at a lot of the conferences we attend.

Yeah, and think about the regulatory side. I do a lot of work with the US Food and Drug Administration and the Environmental Protection Agency. We are institute hosts [for the] National Academy of Sciences on toxicogenomics, and we have a federal forum with those folks. They need standards more than anyone because they're getting drug package submissions and tox-testing results from pharma, and they need to know how to benchmark that data.

They have been very interested in hearing how much variability there is, how to control it, and how to evaluate the data they get in. So it has been really important for them as well.

Maybe you can explain the fundamental methodology here in regards to why you selected twelve microarray platforms and seven different labs. Why did you arrive at this kind of formula?

What we did is we put out a grant solicitation asking for the experts in the field to come to us, because we wanted to launch a really big study, very rigorously conducted, with the best and the brightest in the field. The seven labs that we have came forward, competed successfully, and joined this program, knowing that they were going to be working together.

When they came in, what we did first of all was to look across the different programs and say, 'What platforms are you using?' And as it turns out they were all using a variety of different platforms. Some were buying commercial arrays, but all of them were manufacturing them in house as well. So we decided that we would come up with two standard reference RNA samples and we would give them to these labs, because this is a realistic experiment. Data is published in the literature and people are using their own in-house arrays. We wanted to see how that worked, because we had a representative sample of the scientific community, and so we let them run the experiments in the ways in which they were most comfortable. The twelve platforms arose out of their capabilities, basically.

Then we said, 'Ooh, the correlation wasn't that great when we didn't control anything and we let them use their in-house platforms.' Within an individual lab, the data looked great, really consistent. But when we looked lab to lab, on the same RNA sample, it didn't look very consistent — it was like 30 percent correlation, which is not acceptable from a scientific knowledge standpoint, or certainly if you are going to translate this knowledge into any kind of clinical test. So we knew there were a lot of sources of variability and we decided to try to tease them out.
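The gap Weis describes, tight agreement within a lab but weak agreement between labs running the same reference RNA, can be pictured with a toy calculation like the one below. It is a minimal sketch with hypothetical gene counts, noise levels, and "labs," not the consortium's data or analysis.

```python
# Toy model of the lab-to-lab comparison Weis describes (hypothetical numbers).
# Each lab measures the same "true" log2 ratios but adds its own systematic,
# gene-specific bias (protocol, array batch, scanner) plus small replicate noise.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 1000
truth = rng.normal(0.0, 1.0, n_genes)            # shared biological signal

def make_lab(bias_sd):
    """A lab is a fixed gene-wise bias; each replicate adds only small random noise."""
    bias = rng.normal(0.0, bias_sd, n_genes)
    return lambda: truth + bias + rng.normal(0.0, 0.2, n_genes)

def pearson(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

lab_a, lab_b = make_lab(bias_sd=1.5), make_lab(bias_sd=1.5)

print(f"within-lab (two Lab A replicates): {pearson(lab_a(), lab_a()):.2f}")
print(f"between-lab (Lab A vs Lab B):      {pearson(lab_a(), lab_b()):.2f}")
```

Under these assumed numbers, replicates within a lab agree almost perfectly while the two labs agree only weakly; the sketch's only point is that reproducible lab-specific differences, rather than random noise, can produce exactly the pattern the consortium set out to tease apart.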

So we decided to make two sorts of 'standard platforms.' One was an oligo platform that one of our centers gave to us and that we distributed to all the other labs. Our second standard platform was one we designed with Icoria and Agilent as our commercial array. We provided that array to all the investigators together with the RNA samples, and we said, 'Now run these two, but do it systematically. Control for labeling and hybridization the first time you run it. The second time you run it, control for the scanning of the images, and the third time control for everything.'

Then we did a comparative analysis of how the data looked, and that's really what we are presenting in our paper. Now we have a very good handle on which individual components of the experiment contribute most, how they contribute, how to control for them, and whether they are really important.

Why did you choose to use mouse RNA? It's pretty standard, but why did you go with mouse?

That's a great question, because nowadays there's more standard reference RNA to purchase, but when we started in 2001 there really wasn't much to work with. So we looked across the capabilities of the investigators we had and we found that everybody was used to dealing with mouse. So it was a species that was important for their research and that they were comfortable working with, and also the mouse is 99 percent homologous to humans, so we felt that was a really good model to use as well.

Who made the final decisions in the consortium?

We decided on everything as a consensus. All the investigators had input into the study design because we needed buy-in, and we needed everyone to feel comfortable that they could run the experiment once we decided what it would be. It took probably as long to plan it as it did to do it.

From your study, how much did the different optical imaging and software packages used by labs factor into the variability of the results?

It's a big deal. And we actually did a comparative study on those standard arrays using different analysis software packages and found pretty good variability — we didn't publish that. But the main aspect of the software is not actually the software itself, believe it or not, it's the feature-extraction parameters of the software — in other words, how does it decide what a spot looks like, the dimensions of the spot, the intensity of the spot. And so it's how you use the software that's most important, not the software itself. You can train any software to perform well for you. But using just the default parameters from off-the-shelf software and just throwing it in the mix, you're not going to achieve consistent results. We did find consistency with any individual software package as long as we standardized the parameters for extracting the images.

Any software will perform well for you. The take-home message is that what really matters is how you adjust and control the feature extraction across different microarrays.
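To make "feature-extraction parameters" concrete, here is a minimal sketch of quantifying a single spot from a scanned array image using a fixed spot diameter and a local background annulus. The parameter names and values are assumptions for illustration only, not the consortium's settings or any vendor's software; the point is simply that spot geometry and background handling are the kinds of knobs that need to be held constant across labs.

```python
# Hypothetical illustration of feature extraction for one microarray spot:
# a fixed-diameter circular mask defines the spot, and the surrounding annulus
# supplies a local background estimate. The parameter values are arbitrary.
import numpy as np

def extract_spot(image, center, spot_diameter=10, background_margin=4):
    """Return the background-subtracted mean intensity for one spot."""
    cy, cx = center
    yy, xx = np.indices(image.shape)
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    spot_mask = dist <= spot_diameter / 2
    bg_mask = (dist > spot_diameter / 2) & (dist <= spot_diameter / 2 + background_margin)
    return image[spot_mask].mean() - np.median(image[bg_mask])

# Toy image: uniform background (100) with one bright spot (600) centered at (20, 20).
img = np.full((40, 40), 100.0)
yy, xx = np.indices(img.shape)
img[np.sqrt((yy - 20) ** 2 + (xx - 20) ** 2) <= 5] = 600.0

print(extract_spot(img, center=(20, 20)))   # roughly 500 after background subtraction
```

Two labs running this same toy routine with different spot diameters or background rules would report different intensities for an identical image, which is the software-side variability Weis describes.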

From what I understand, then, the data variability issues are not due to the technology, but to the researchers not using the technology to their advantage.

Well, I think it is better said like this: if you look at our sources of variability, and I think this holds true across other papers that have been published, the array type matters the most. It has a lot to do with the manufacturing and quality control that go into the development of those arrays. Even commercially developed arrays have some level of variability, even though they are automated and quality controlled. But that variability goes up exponentially when you get into most small academic research labs.

If you are a big academic lab — you're a powerhouse like John Quackenbush's lab or Pat Brown's lab — and you've got the same level of controls that a commercial development lab would have, you are probably going to get pretty consistent results. But there aren't many labs across the country that can achieve that level of consistency with their in-house arrays. It's really the platforms and the way they are manufactured, and all the optimization that goes into the experiments — that's really the biggest source of variability, the actual platform itself.

So it's the homebrews that actually have the highest level of variability.

Yes, and that's just because things aren't quality controlled and engineered at the level you would expect if you were going to take a product to market. The incentive for commercial vendors to have high-quality products is there because they have a market to keep. So it makes sense.

But do you think it would then make sense for labs to maybe reconsider spotting their own if they are going to bring their findings to the intellectual marketplace?

I think the answer lies in what you want to do with the data at the end of the day. If you are a researcher and you want to understand a disease process and you are going to use microarrays to do that, because you are using it to develop knowledge, you may say 'I can live with a certain level of variability.'

If you want to use that data to submit a drug package to FDA, you may say 'I can't live with any level of [consistency] that's below 90 percent.' It depends on what you want to use the data for.

I think a lot of academic labs are now turning to commercial microarrays because, even though they are more expensive, the results are so reliable that they actually have to run fewer experiments. So maybe the cost is equal.

What is it going to take for those who read your paper to take your methods back to the lab and implement your recommendations?

I think this paper is a first step. It's a starting gun saying, 'hey guys, this is what's important.' Now, whether they feel that their own lab needs to implement these standards is obviously going to be their personal choice. I will tell you with a fair amount of certainty that if this technology is going to be brought into the regulatory or clinical arena, it's paramount that we have standards, absolutely paramount. And at every forum we go to, in every one of those communities, they are crying out to know what to do, how to do it, and what is most important. The application is clearly there in the clinical, drug development, pharma arena. Researchers, however, may still decide that the quality of data they are currently generating is adequate for their needs. That is going to be their choice.

What is the next step for your group?

We've just finished collecting all of our data looking at variability and biological response. Had we run this experiment across two labs before we did this study and found inconsistent results, we wouldn't have known whether the difference between Lab A and Lab B was due to experimental problems and technical variability, or to the underlying biological variability. We followed up this study by controlling everything on the technical side, and now we are looking at biological response to specific environmental stressors across labs and across model systems. We have got mouse, rat, and yeast, and we are looking for conserved biological responses across these organisms. This is clearly relevant to the development of drugs and to the regulatory risk assessment processes that the FDA hangs its hat on, basically. So that's our follow-up study; we are in the process now of trying to analyze the data. We used acetaminophen as our model compound, although the design could be applied to any compound.
