As researchers test microarray platforms to try to separate the wheat from the chaff and to decide which platforms to run with, they continue to be frustrated by the lack of consistent results across varying systems.
Earlier this month, a team of researchers at the Jackson Laboratory published an in-house comparison between mouse Affymetrix arrays, cDNA arrays, and homemade oligo arrays in the Journal of Biomolecular Techniques. “We wanted to do this to figure out what to do in our own lab,” said Gary Churchill, the paper’s senior author and a co-director of Jackson’s microarray core facility.
Results from the Affy chips (Affymetrix Mouse Genome Expression Set 430 GeneChips, A and B) and the oligo arrays (Compugen set of 22,000 oligos) compared well, prompting the lab to settle on these two platforms for at least the next two years. “We are finding high concordance [between the platforms], [and] we get basically the same gene lists where they overlap,” said Churchill. Meanwhile, the lab has upgraded to Affy’s new version 2 mouse arrays and is even happier with those than the older version tested.
cDNA arrays, on the other hand, though they were repeatable, gave results that differed markedly from the other two platforms. The researchers did not compare all three platforms against a common standard, such as RT-PCR. But, “We found that consistency across two platforms was a major indicator that the third platform wasn’t performing up to standard,” Churchill told BioArray News.
This doesn’t necessarily mean, though, that cDNA arrays no longer have a place. For organisms whose genomes have not yet been sequenced, no other arrays may be available, he said.
The Jackson researchers found they can compare results from two different platforms in-house, but analyzing results across different labs is another story, Churchill cautioned. “I am not a big fan of the idea that you can simply dump all this data into one big Mac and make sense of it,” he said. “Planning cross-laboratory studies is essential to their success.”
Finding out whether results from different labs and platforms are comparable was exactly the aim of a similar study conducted by the NINDS/NIMH Microarray Consortium that was presented at this year’s ABRF meeting (see BAN 3/3/04) and will soon be submitted for publication.
The consortium consists of three centers that the National Institute of Neurological Disorders and Stroke and the National Institute of Mental Health designated to provide microarray services to researchers funded by the two agencies: the University of California at Los Angeles’ microarray center, headed by Stan Nelson; Duke University’s microarray core, led by Pate Skene; and the Translational Genomics Research Institute microarray center under Dietrich Stephan.
“The first order of business, we felt, should be to show that the services provided by the centers were comparable, because each center was offering different types of microarrays,” said Barry Merriman, a research professor at UCLA affiliated with the core facility who led the study.
The researchers compared Affy’s Human Genome U133A array; Amersham’s (now GE Healthcare’s) CodeLink system with the Uniset Human 20K I oligo set; Agilent’s Human 1A oligo microarray; arrays printed at the UCLA core using Clontech’s Atlas Oligo Human 13K set; arrays printed at Duke’s core using Operon’s Human Genome Oligo Set, v. 2.1; and spotted 33k cDNA arrays produced at the NIH microarray core.
The main conclusions: The major commercial platforms give repeatable results but only show a modest correlation to each other. Also, the study corroborated the Jackson Lab finding that researchers need to be cautious about cDNA arrays.
Four platforms — Affymetrix, Amersham, Agilent, and Operon — gave repeatable results, and showed roughly 70 percent correlation with one another and with an RT-PCR assay. “In simple terms, 70 percent of what you are measuring is real gene expression, and another 30 percent of it is platform-specific artifacts of some sort,” Merriman said. “It’s acceptable. You would like it to be better, of course, but that’s fine.”
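To make the kind of concordance figure cited above concrete, here is a minimal sketch of how a per-gene correlation between two platforms might be computed: match probes by gene identifier, then correlate the measured log2 fold-changes. The gene names and expression values below are invented for illustration and are not data from the consortium study.

```python
import math

# Hypothetical log2 fold-changes for the same genes on two platforms (made-up values)
platform_a = {"GeneA": 2.1, "GeneB": -1.3, "GeneC": 0.4, "GeneD": 1.8, "GeneE": -0.9}
platform_b = {"GeneA": 1.9, "GeneB": -1.0, "GeneC": 0.9, "GeneD": 1.5, "GeneE": -0.4}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Only genes probed on both platforms can be compared
shared = sorted(set(platform_a) & set(platform_b))
r = pearson([platform_a[g] for g in shared], [platform_b[g] for g in shared])
print(f"{len(shared)} shared genes, correlation r = {r:.2f}")
```

In practice a real comparison would also have to handle one probe mapping to several genes, differing probe sequences for the same gene, and normalization differences between platforms — which is part of why inter-platform agreement tops out well below 100 percent.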
These inter-platform differences are due to the limits of hybridization-based experiments in general, Merriman said, and to the choice of probes and their quality. As commercial arrays go through new versions, some of this might be sorted out, he reasoned.
cDNA arrays and Clontech oligo arrays did not fare well in the comparison, agreeing only 40 percent of the time with RT-PCR. While cDNA arrays may still be useful for the analysis of simple genomes, “a good lesson to take away from this is to definitely move away from cDNA arrays for mammals,” Merriman said. “There is clear evidence that they are not working as well.”
He recommended that at this point, researchers should not compare data from different platforms. “Our conclusion is that that’s not a profitable thing to do because there is only a modest correlation between the platforms,” Merriman said. Rather, individual researchers should choose a platform and stick with it if they want to make meaningful comparisons, he said.
As to which platform to choose, Merriman does not want to make any endorsement. “At this point, you really can’t say one is a clear winner over the others,” he said.
It doesn’t appear that much has changed since October 2003, when Nucleic Acids Research published “Evaluation of gene expression measurements from commercial microarray platforms,” a paper detailing the results of a similar cross-platform comparison.
One of the paper’s authors, Margaret Cam, director of the microarray core facility at the National Institute of Diabetes and Digestive and Kidney Diseases, told BioArray News last year that in comparing three commercial microarray platforms — Affymetrix, Agilent, and Amersham — researchers found “very little overlap in the types of data in terms of differential gene expression” (see BAN 10/1/2003).
Cam did not return calls seeking comment for this article.