NIDDK's Margaret Cam on Microarray Cross-Platform Comparisons


At A Glance

Margaret Cam, Director, Microarray Core Facility, NIDDK, NIH

Education

1988 — BSc, Pharmacology, University of British Columbia

1996 — PhD, Pharmacology, University of British Columbia

Post-doctoral — McDonald Research Laboratory, St. Paul’s Hospital, Vancouver, BC, Canada; NIDDK, National Institutes of Health.

Earlier this spring, Margaret Cam and a number of microarray researchers attending the Macroresults from Microarrays conference in Boston gathered in a lecture room and engaged in a robust conversation about cross-platform comparisons of microarray data.

Cam, the director of the microarray core facility at the National Institute of Diabetes and Digestive and Kidney Diseases, told BioArray News then that industry should make stronger efforts to standardize its microarray platforms to help investigators make sense of the data.

Cam still feels that way. And now, she can base her opinion on the results of a one-year research project, which will be published in the October issue of Nucleic Acids Research as “Evaluation of gene expression measurements from commercial microarray platforms” (Paul K. Tan, Thomas J. Downey, Edward L. Spitznagel Jr., Pin Xu, Dadin Fu, Dimiter S. Dimitrov, Richard A. Lempicki, Bruce M. Raaka, and Margaret C. Cam. Nucleic Acids Research, 2003, Vol. 31, No. 19, 5676-5684).

She spoke with BioArray News to explain the publication and the research project, which analyzed identical RNA samples on each of three commercial microarray platforms: Affymetrix, Agilent Technologies, and Amersham's CodeLink.

What spurred this research?

We did an evaluation to see which platform would serve the best purpose for our users. We compared CodeLink, Affymetrix, and Agilent’s cDNA arrays [their long oligo arrays were not available at the time].

What comparisons of this type have you seen in the literature?

There have been a few other studies. We are not the first to report comparisons across different platforms. Two of them have shown differences. One has shown similarity. We have referenced them in the paper.

How did you conduct the comparisons?

Using the same source of RNA, we compared the three different platforms. We used each platform's standard protocols and, in fact, for two of the platforms, Agilent and Amersham, we were hands-off. They sent applications scientists to run the arrays for us, to do the labeling, and to provide us with the data. All the assays were done here at NIH. For Affymetrix, we used an NIAID core facility in Frederick, which is well-experienced with the system, having used it for three or four years. We did that because we wanted experienced technical hands to run this experiment, and not to introduce any artifacts due to human error. We are convinced that the data produced from the platforms are the best that we can produce.

And, what did you learn?

Our conclusions are very straightforward: there was very little overlap across the platforms in terms of differential gene expression.

We are still trying to see if different probe designs might have contributed to that. Probes are designed based on sequences extracted from mRNA and EST databases. Depending on which part of the sequence each company selected to base its probe designs on, the [microarrays] may be detecting alternatively spliced transcript variants. We are highly interested in pursuing that.
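To make the probe-placement issue concrete, here is a toy sketch in Python. It is purely illustrative: the gene, isoform, and probe sequences are invented, and real probe-to-transcript matching would use alignment rather than exact substring search. The point is that probes chosen from different regions of the same mRNA can see different splice forms.

```python
# Hypothetical illustration of the probe-placement issue Cam describes:
# two platforms' probes for the same gene can interrogate different
# exons, so they may report different alternatively spliced isoforms.
# All sequences and names below are invented for demonstration.

# Toy isoforms of one gene: isoform_b skips the middle exon.
isoforms = {
    "isoform_a": "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG",
    "isoform_b": "ATGGCCATTGTAGGGTGCCCGATAG",  # middle exon spliced out
}

# Each platform chose a different region of the mRNA for its probe.
probes = {
    "platform_1": "ATGGGCCGCTGA",   # lies in the skipped exon
    "platform_2": "GGGTGCCCGATAG",  # lies in the shared 3' exon
}

for platform, probe in probes.items():
    # Exact substring match stands in for a real alignment step.
    hits = [name for name, seq in isoforms.items() if probe in seq]
    print(f"{platform}: detects {hits}")

# platform_1: detects ['isoform_a']              -> misses isoform_b
# platform_2: detects ['isoform_a', 'isoform_b'] -> sums both isoforms
```

If the samples differ in which isoform they express, these two hypothetical platforms would legitimately disagree on whether the gene changed.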

The Venn diagrams, well, they are better than random. That much we can say. If somebody says ‘Throw it in the air,’ it’s better than that. But not much better. There was between 25 and 35 percent overlap. We were expecting close to 100 percent overlap. That’s why we were so confused by what actually came out of this study. It didn’t make sense to us initially. Finally we decided to deal with it, formalize the results, and write it up as a paper.
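A minimal sketch of this kind of Venn comparison is below, with the "better than random" baseline made explicit. The gene IDs, list sizes, and the 2,000-gene common set are invented placeholders, not values from the paper; under random, independent selection the expected intersection of two lists of sizes |A| and |B| drawn from N genes is |A|·|B|/N.

```python
# Sketch: observed overlap between each pair of platforms' lists of
# differentially expressed genes (DEGs), versus the overlap expected
# by chance. All gene IDs and list sizes are hypothetical.
from itertools import combinations

n_common_genes = 2000  # genes probed by all three platforms (assumed)

# Hypothetical DEG calls per platform (sets of shared gene IDs).
deg_calls = {
    "affymetrix": {f"g{i}" for i in range(0, 300)},
    "codelink":   {f"g{i}" for i in range(200, 480)},
    "agilent":    {f"g{i}" for i in range(150, 400)},
}

for (name_a, set_a), (name_b, set_b) in combinations(deg_calls.items(), 2):
    observed = len(set_a & set_b)
    union = len(set_a | set_b)
    # Expected intersection size under random selection: |A|*|B|/N.
    expected = len(set_a) * len(set_b) / n_common_genes
    print(f"{name_a} vs {name_b}: overlap {observed}/{union} "
          f"({observed / union:.0%} of union), "
          f"random expectation ~{expected:.0f} genes")
```

With numbers like these, each pairwise overlap clearly beats the chance expectation while still falling far short of full agreement, which is the shape of the result Cam describes.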

The results were surprising, to say the least.

We did run a two-way [analysis of variance], looking at the biological and technical replicates we ran in each of the platforms. We found that where one of the platforms would call a gene differentially expressed, the likelihood that the other two would do the same was very low. You couldn’t expect the same answers every time you looked at a different platform.
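For readers who want to see the mechanics, here is a minimal per-gene two-way ANOVA sketch using statsmodels. The long-format table (columns 'expr', 'condition', 'biorep'), the replicate layout, and every intensity value are hypothetical; the actual design and thresholds in the paper may differ.

```python
# Sketch: per-gene two-way ANOVA on log-intensities, with factors for
# experimental condition and biological replicate. Values are invented.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One gene's measurements: 2 conditions x 2 biological replicates
# x 2 technical replicates = 8 data points.
data = pd.DataFrame({
    "expr":      [7.1, 7.3, 7.0, 7.2, 8.9, 9.1, 8.8, 9.0],
    "condition": ["ctl"] * 4 + ["trt"] * 4,
    "biorep":    ["b1", "b1", "b2", "b2"] * 2,
})

# Two-way ANOVA: condition effect plus biological-replicate effect;
# technical replication is absorbed into the residual variance.
model = smf.ols("expr ~ C(condition) + C(biorep)", data=data).fit()
table = anova_lm(model, typ=2)
print(table)

# A gene would be called differentially expressed when the condition
# p-value clears a chosen threshold (after multiple-testing correction).
p_condition = table.loc["C(condition)", "PR(>F)"]
print(f"condition p-value: {p_condition:.3g}")
```

Running such a test gene by gene on each platform yields the per-platform DEG lists whose weak cross-platform agreement Cam describes above.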

For a while there, it was upsetting. We were at a loss for how to interpret the data. The data was to have been used for a scientifically based study. The group that provided the RNA wanted to get some answers and write a paper based on results. Because of the disparate results we got back, it was hard to put things together. We decided that we would stick to one platform for a while to simplify things, and to start validating from the one platform.

And, that platform was?

Initially, we selected Affymetrix because we had run more time points with that platform — more than the other two.

As far as selecting a platform to support services in our lab, Affy won by default, primarily because almost everybody uses them. It’s hard to run a core lab without using them. We also made a decision on an alternative platform so that people can have a choice. But, we couldn’t use accuracy as the basis of selection. We don’t know which is the most accurate. We will do RT-PCR to see which is more accurate. [To do that], we have to select good candidate genes. We will run that within the next month.

What were your thoughts as you worked through this process, and the consideration of publishing it?

Early on, we had some internal decisions to make as to whether or not to publish the data. We decided to publish it out of a desire to share information like this, which might be negative, but might also be informative for the microarray community at large. We grappled a lot with what impact this type of data would have on microarray users in general. We came to the decision that readers would look at our data and make up their own minds. We were not trying to point a finger at anybody in particular in the industry. What we are saying is that there might be a need for more standardization.

What problems need fixing?

Some of the manufacturers don’t provide their sequence information readily. When we don’t have access to that, it points to a bigger problem, and that is: What are we comparing?

Who didn’t provide the information you need?

Amersham will only release limited sequence data. We tried to get all the sequence data from them but were not successful. But, we had enough to do our analysis. Agilent doesn’t allow full access to its cDNA sequence information, and only gave [us] partial sequence information for its cDNA probes. There are still some customers that have used, and are still using, their [Agilent] cDNA platforms. For historical reasons, we need to compare all these different data across the board.

Do you think you were rigorous and thorough in conducting this research, and is that the standard that researchers should follow in conducting microarray analysis?

Nowadays, people do hold microarray data to a higher standard. You can’t publish results without doing some level of validation, RT-PCR or Northerns. I think as far as medical data is concerned, it’s hard to say. It is still early stage. You can see papers out there that claim they can separate patients based on gene expression profiles. I don’t doubt that they can. I think we are still at the stage where we should be collecting more data until we have validated, from a purist standpoint, at least a majority of the genes that will classify patients or create a diagnostic profile for those patients.

Everybody is in the same boat: people want to publish, people want to make the big discoveries. Microarrays hold the promise of allowing people to do that. At this point, I don’t see it happening. There is still hope. Microarrays are not dead, but more optimization, and more understanding of the technology, needs to happen. There need to be more studies like this. We need to encourage people to look into their own data more closely and see if they can make the comparisons themselves. Unless more and more people do studies like this, we won’t know what the problem is with a technology that has to gain full acceptance before it can be used in more crucial areas, like clinical and therapeutic applications.

What do you hope are the lasting effects of this study?

I’m hoping that it opens a dialog and gets companies to talk to one another and find a way to standardize across the industry. We pay for the products. If we can’t get first-hand what the real data coming out of microarrays is, what difference does it make? These are not unreasonable demands that we are making. We are only asking to interpret our data with more ease and accuracy than we have in the past. Scientists in general don’t have the necessary funds to run even more replicates, as the statisticians are asking for, given this state of affairs, with so little reproducibility and such problems comparing across platforms. It’s a minefield to wade through.

 
