
Researchers Develop New Microarray Experimental Design that Improves Data Quality, Reliability


Noisy data, complex bioinformatics requirements, and a need to confirm findings using independent techniques such as quantitative PCR are all factors that have negatively impacted the use of microarray technology, according to the authors of a new paper.

To combat these issues, the researchers developed a new experimental design that produces calibrated microarrays capable of directly measuring gene expression as well as gene copy number, without the need for follow-up verification. The authors discussed the approach in a new PLoS ONE paper published this week.

Peter Noble, an associate professor of microbiology at Alabama State University in Montgomery and co-author on the paper, told BioArray News that the project developed out of the authors' frustration with existing array technology.

"DNA microarrays have a lot of errors," Noble said. "We would do microbiology experiments, working with many different types of DNA microarrays, and often probes would light up that were not even complementary," he said. "This is a big problem for everybody," Noble added, "because people are using DNA microarrays all of the time."

Noble worked with colleagues at the University of Washington in Seattle, the Max Planck Institute for Evolutionary Biology in Germany, and the University of Reading in the UK to identify the noise components of the experimental procedure. They zeroed in on two sources of variance: issues related to probe binding and poor probe performance. To address the variance in probe binding, they hit upon a strategy of replicating array probes. To gauge poor probe performance, they decided to calibrate the arrays against a dilution series of target molecules.

"We decided to average probes together, ten at a time, and take the mean value," said Noble. "There is also a lot of variability associated with target labeling techniques, so we took the same sequences, labeled them ten different times, and pooled them together." Noble and his colleagues then diluted the pooled sample and hybridized it at a series of concentrations to the replicate probes on the array in order to calibrate each probe's performance. "For every probe, we can perfectly calibrate it, we can remove non-responsive probes, and by doing so we remove lots of the errors that are associated with DNA microarrays," he said.
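The averaging-and-calibration procedure Noble describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' published software: the linear signal-versus-log-concentration model, the R² responsiveness threshold, and all numbers are assumptions for demonstration.

```python
import numpy as np

def calibrate_probe(concentrations, signals, min_r2=0.9):
    """Average replicate probes at each dilution, fit a linear response
    (signal vs. log10 concentration), and flag poorly responding probes.
    `signals` has one row per replicate probe, one column per dilution."""
    x = np.log10(concentrations)
    y = np.mean(signals, axis=0)           # average the replicate probes
    slope, intercept = np.polyfit(x, y, 1) # calibration curve
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot if ss_tot > 0 else 0.0
    responsive = bool(slope > 0 and r2 >= min_r2)
    return slope, intercept, responsive

# Simulated dilution series of the pooled, labeled target (arbitrary units)
dilutions = np.array([1.0, 10.0, 100.0, 1000.0])
# Simulated signals: 10 replicate probes x 4 dilutions, with noise
rng = np.random.default_rng(0)
true_signal = 50 + 100 * np.log10(dilutions)
replicate_signals = true_signal + rng.normal(0, 5, size=(10, 4))

slope, intercept, ok = calibrate_probe(dilutions, replicate_signals)
```

A probe whose averaged signal fails to track the dilution series (flat or noisy fit) would be dropped as non-responsive, which is the error-removal step Noble refers to.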

To test out the new experimental design, the authors evaluated two custom arrays: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. They found that "significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment."

Given the enhanced performance provided by such an experimental design, the authors said in a statement that the method "alleviates the need for normalization procedures and reference standards," and argued that such calibrated microarrays are "truly analytical instruments, similar to pH meters," and that they yield "accurate, reliable, and repeatable results that are currently not attainable with next-generation sequencing technology."

"I have my own 454 sequencer and I have used it a lot, but you can't really quantify things by 454, so the best way to do that for the next five or ten years is by using DNA microarrays, especially now that we have a method that works," Noble said.


Among the benefits of the proposed experimental design, the authors cite a decreased reliance both on complex bioinformatics and on independent confirmation methods. At the same time, they noted that the approach has some disadvantages.

"Right now, there are 50 different software packages you can use for DNA microarrays," said Noble. "But we provide in the paper [links to] three simple programs to average the probes, calibrate the arrays, and calculate the target concentrations," he said. "It takes out all of the mysteries of DNA microarrays so it's accurate now."
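The last of the three steps Noble mentions, calculating target concentrations, amounts to inverting the calibration curve fitted for each probe. The sketch below is a hypothetical illustration of that idea, assuming a linear signal-versus-log-concentration calibration; it is not the programs provided with the paper.

```python
def target_concentration(signal, slope, intercept):
    """Invert a calibration of the form
        signal = slope * log10(concentration) + intercept
    to estimate target concentration from an observed,
    probe-averaged signal."""
    return 10 ** ((signal - intercept) / slope)

# Hypothetical calibration: signal = 100 * log10(conc) + 50.
# An observed signal of 250 then maps back to a concentration of 100.
conc = target_concentration(250.0, slope=100.0, intercept=50.0)
```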

As for confirmation techniques, like qPCR, Noble claimed that they are "not necessary" for researchers who use the revised experimental microarray design. "That's the beauty of it," Noble said. "This might be good for diagnostic services that regularly use the same microarray, because once it is calibrated, you don't need to calibrate it anymore," he added.

However, the suggestion that probes be replicated to reduce variance could cost researchers some real estate on their chips. In the paper, the authors suggest at least 10 replicate probes for 25-mer arrays like those sold by Affymetrix, and 6 for 60-mer arrays like Agilent's.

"An evident drawback of replicating probes is that it limits the number of different probes that can be surveyed on an array," the authors wrote. "This presents a trade-off between the quality of signal and the number of different genes that can be studied, at least for array designs that cannot compensate for this with very high probe densities," they wrote.

"Certainly, there is a problem, because it limits the number of probes that you can put on a microarray," Noble acknowledged, "but you are saving because you are getting a better quality signal."
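The capacity cost of the trade-off is simple to quantify. The feature count below is a hypothetical round number for illustration; only the replication factors (10 for 25-mer arrays, 6 for 60-mer arrays) come from the paper.

```python
# Replicating probes trades array capacity for signal quality.
features = 1_000_000        # hypothetical total features on the array
replicates_25mer = 10       # suggested replication for 25-mer (Affymetrix-style) arrays
replicates_60mer = 6        # suggested replication for 60-mer (Agilent-style) arrays

# Distinct probe sequences that still fit after replication
distinct_25mer = features // replicates_25mer
distinct_60mer = features // replicates_60mer
```

On these assumed numbers, a million-feature 25-mer array surveys a tenth as many distinct sequences once replicated, which is the limitation Noble acknowledges.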

Another drawback might be that researchers who rely on the approach would have to redesign their arrays to account for the probe replicates. "It means that everybody would have to follow the example that we provide," said Noble.

As for what comes next, Noble said that the group's work vis-à-vis microarray experimental design has probably come to an end. "We have been working in this field for ten years, and we have ten papers published, but we have solved the problem of getting the noise out of DNA microarrays, so you could say that we put ourselves out of business."