At A Glance
Doug Amorese
1996-Present — R&D Manager for Hewlett-Packard, then Agilent Technologies, Bio-Research Solutions Unit
1987-1996 — Automated DNA Sequencer and Microbial Detection, DuPont.
Education: 1976 – BA, Colgate University
1981 – PhD, Biochemistry, Colorado State University
Doug Amorese is R&D Manager at Agilent Technologies’ Bio-Research Solutions unit, where he is responsible for developing and commercializing Agilent’s DNA microarrays and the instrumentation and software for the platform. Amorese has held this position since before Agilent spun off from its parent company, Hewlett-Packard, in 1999.
After Agilent announced on Oct. 2 that it was shipping its single-microarray, whole-human-genome products to beta customers, Amorese spoke to BioArray News to discuss the arrays, and to provide insight into the validation processes Agilent uses to produce the probes that it synthesizes in situ on its 60-mer oligonucleotide array products.
How would you characterize the probe quality for your newest arrays?
What we have done is combine the expertise we have developed in printing and probe design and validation with a maturing understanding of the human genome to generate a product that represents that genome as well as any product today can. It starts with the flexibility in design to identify the probes that you are going to choose to represent the genes. We have capitalized on the probe design and validation technology that we used to develop the first two human arrays, the 1A and 1B.
Does the ink-jet deposition process that Agilent uses present any challenges for printing the human genome onto your arrays?
At this density, there is no ink-jet hurdle. The hurdle we had to overcome was: do we understand the genome well enough to make an array that people are interested in looking at [on] one piece of glass?
What is the innovation in your product?
What we are doing is called close-pack printing: you have two regular array patterns in an x-y grid. We are taking one pattern and offsetting it by half a row down and half a column across. The feature-to-feature spacing has gone down very little, and background subtraction is largely untouched by this change. We had to make sure that the feature extraction software would align to the new grids. And not every feature lights up. This is expression profiling, so not every gene is expressed in every tissue, which leaves gaps. If you think of the software as a connect-the-dots diagram, some dots should not be connected.
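For readers who want to picture the layout, here is a minimal sketch of the close-pack geometry Amorese describes: two regular x-y grids, the second offset by half the pitch in each direction. The pitch value and feature counts are illustrative, not Agilent's actual print parameters.

```python
# Two regular x-y grids, the second offset by half a row down and
# half a column across -- the "close-pack" pattern described above.
# Pitch and grid size are illustrative values only.

def close_pack_coords(rows, cols, pitch):
    """Yield (x, y) feature centers for two interleaved grids."""
    for r in range(rows):
        for c in range(cols):
            # Primary grid on the regular lattice.
            yield (c * pitch, r * pitch)
            # Secondary grid offset by half the pitch in x and y.
            yield (c * pitch + pitch / 2, r * pitch + pitch / 2)

# Example: a 3x3 primary grid with a 100-unit pitch gives 18 features.
features = list(close_pack_coords(3, 3, 100))
print(len(features))   # 18
print(features[:2])    # [(0, 0), (50.0, 50.0)]
```

The offset grid nearly doubles feature density while changing the distance between neighboring features very little, which is why background subtraction is largely unaffected.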
Will you discuss your probe validation efforts?
We started this with the first human oligo arrays.
When we select a transcript to design against, we compare that transcript to other defined transcripts in other databases, note when it was sequenced, and map it to the human genome. Then we ask: Is there an intron/exon junction close to this? How does it compare: is it a good match, or a perfect match? For the sequence, good is not good enough; perfect is the standard.
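As a hedged illustration of the "perfect is the standard" filter, the sketch below keeps a candidate probe only if it occurs as an exact substring of exactly one transcript. The transcript records and probe sequence are hypothetical, and a real pipeline would use exhaustive alignment against the genome rather than simple substring search.

```python
# Hypothetical "perfect match" check: a candidate probe passes only if
# it matches exactly one known transcript, exactly -- no near-misses.

def perfect_match_count(probe, transcripts):
    """Count transcripts containing the probe as an exact substring."""
    return sum(probe in seq for seq in transcripts.values())

# Toy transcript records (accessions and sequences are made up).
transcripts = {
    "NM_0001": "ATGGCGTACGATCGATCGGATCCGATTACA",
    "NM_0002": "ATGGCGTTTGATCGATAGGATCCGATTACA",
}

candidate = "GATCGATCGGATCC"
if perfect_match_count(candidate, transcripts) == 1:
    print("candidate is gene-specific at the sequence level")
```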
We identify what we believe to be a conserved region of the gene, or exon; we design 10 probes to that [area]. We make custom arrays with all of the candidates [exons] represented on them. Then we identify specific tissue pairings. We try to select two tissues that we believe are optimally different from each other, with the most differential expression. We have 10 pairs. We label one side of each pair in one color and the other in another. Then we hybridize to all of the candidates to see if a probe is specific to a gene. Take, for example, a scenario where all 10 candidates are specific to a gene and are differentially expressed. Then all 10 of them will cluster, depending on the quality of the probes, either tightly or stretched out in the dimensions of signal intensity. We plot signal intensity vs. log ratio, and we like to find all of them [probe candidates] falling into the same group. Occasionally, two of them won’t cluster with the other eight. Those probes typically have low signal, don’t hybridize well to that target, and get ruled out.
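A minimal sketch of that rule-out step, assuming illustrative thresholds and made-up measurements: for the ten candidates of one gene, flag any probe whose log ratio falls away from the group's, particularly when its signal is also low. The real analysis clusters probes in signal-versus-log-ratio space; this toy version just compares each probe to the group median.

```python
# Toy version of the clustering rule-out: flag candidates that fall
# away from the group's log ratio AND have low signal. Thresholds and
# data are illustrative, not the actual validation criteria.
import statistics

def flag_outliers(probes, ratio_tol=0.5, min_signal=200.0):
    """probes: list of (name, signal_intensity, log_ratio) tuples."""
    center = statistics.median(lr for _, _, lr in probes)
    flagged = []
    for name, signal, log_ratio in probes:
        off_cluster = abs(log_ratio - center) > ratio_tol
        if off_cluster and signal < min_signal:
            # Low signal, poor hybridization to the target: rule out.
            flagged.append(name)
    return flagged

# Eight well-behaved candidates plus two low-signal stragglers.
candidates = [("p%d" % i, 1500.0, 2.1 + 0.05 * i) for i in range(8)]
candidates += [("p8", 90.0, 0.3), ("p9", 110.0, 0.1)]
print(flag_outliers(candidates))  # ['p8', 'p9']
```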
When we have gone through the clustering exercise, we pick a single probe to represent each gene. For the 1A array, on the order of 80 percent of those probes were validated as biochemically detecting a single specific gene.
Can you make adjustments on the fly?
For the HAC 1A, there already is an adjustment. It is now HAC 1A, rev 1B. The name is not very catchy. After we finished the design of the first human oligo array, more genes were recognized, and the annotation got better. When dealing with tissues, we sometimes have two different transcripts, one in one type of cell and one in the other, and we try to sort them out.
We can generate so many candidate probes so easily that we can explore these things. And we synthesize in situ, as needed. We don’t have the fixed cost of having to buy [the] 180,000 different oligos in Human 1A over the two different designs. If you were buying from an oligo house at 20 cents a base, it would be an incredible expense for a 60-mer array: that’s 12 bucks an oligo, and you have 180,000 of them. And then you are only using 10 percent of them. For another type of array manufacturer, you couldn’t do that; it doesn’t make business sense. We make it in situ, as much as we need, on that single array.
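Amorese's back-of-the-envelope figures work out as follows (the 10 percent usage rate is his estimate from the interview):

```python
# Checking the arithmetic behind the quote: 60-mers at 20 cents a base,
# 180,000 oligos across the two Human 1A designs, ~10% ultimately used.
cost_per_base = 0.20     # dollars per base from an oligo house
probe_length = 60        # bases per oligo (60-mer)
n_oligos = 180_000       # candidates across the two designs

cost_per_oligo = cost_per_base * probe_length   # $12 -- "12 bucks an oligo"
total_cost = cost_per_oligo * n_oligos          # $2,160,000 up front
used_fraction = 0.10                            # only ~10% reach the final array
print(f"${cost_per_oligo:.0f}/oligo, ${total_cost:,.0f} total, "
      f"${total_cost * (1 - used_fraction):,.0f} spent on unused probes")
```

In-situ synthesis sidesteps that sunk cost entirely, since nothing is made until a design calls for it.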
Can you talk about how you transfer the technology from R&D?
After the validation process, the rest of the process is very computer driven. Mapping to the genome is computer driven, and so is probe selection.
After we have identified the transcript of interest, we send it to another program. We have trained our software, based on our previous experience, on what to look for in a good probe, and it kicks candidates back to us. Our order fulfillment group takes that design and determines what gets printed, and in what order, from a very complex image.
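The interview does not spell out what the trained software actually checks, so the following is a hypothetical stand-in only: a scorer built on two textbook probe-design criteria, GC content near 50 percent and no long homopolymer runs. The example sequences are shortened for readability.

```python
# Hypothetical probe scorer -- NOT Agilent's actual criteria, just an
# illustration of automated candidate ranking on simple sequence features.
import re

def probe_score(seq, gc_target=0.5, max_run=5):
    """Return a 0..1 score; higher means a more promising candidate."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    gc_penalty = abs(gc - gc_target)        # 0 when GC content is on target
    # Penalize any single base repeated more than max_run times in a row.
    run_penalty = 0.5 if re.search(r"(.)\1{%d,}" % max_run, seq) else 0.0
    return max(0.0, 1.0 - gc_penalty - run_penalty)

candidates = ["ATGCGTACGTTAGCATCCGATGCATGCAAT",
              "AAAAAAAAGCGCGCATATATGCGCATGCAT"]
# Rank candidates the way a selection program might kick them back.
for seq in sorted(candidates, key=probe_score, reverse=True):
    print(round(probe_score(seq), 2), seq)
```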
We have released so many different arrays now that it is a smooth and seamless process. We create a design file, like a huge PowerPoint presentation, which we send to manufacturing, and they send back the arrays.
What kind of computing platform do you use?
We have old HPs, mostly PCs. Most of the software that we are using was developed in-house. We leverage what we can find, but as it turns out, a lot of our needs are unique. We do have some folks who know something about computers.
What kind of feedback are you getting on the new arrays?
We have beta testers and we are using it internally. We have already used it in two external collaborations, and we have received good reports back. We are looking forward to getting the feedback from folks who are purchasing these.
Let’s talk about instrumentation. What are customers asking for out of microarray scanners?
The large improvement that we tackled last year was more an issue of the feature extraction software than of the scanner itself. People love the software, the automation, and the scanning and feature extraction, but home-brew array users had to feature-extract using another package. The new version of our software has a feature extractor that allows users to process home-brew arrays. That has been well received.
With respect to our scanner, we still have the highest-sensitivity, most automated, fully integrated platform around. We continue to look at ways to improve it further. Our scans take a shorter period of time than anyone else’s, certainly in the 1x3 [slide] format. As for sensitivity, a researcher can never have enough. We are fortunate that our array has such low background that we don’t have to be as concerned with array noise or glass background fluorescence.
What’s your view about the microarray market in this time of innovative new products such as the whole-human genome arrays being produced?
It’s an exciting time, no question about that. I think that the end users, the customers, are getting more sophisticated about the types of questions they are asking. The learning is really interesting, whether it’s cancer profiling or studies in toxicology to look at pathways. The genome is maturing nicely, and the informatics tools to mine that data are coming along well, as is the technology for making reliable, high-quality arrays. These factors point in the same direction: this technology is going to come into its own.