
Edward Wagner of Cal-Irvine Holds the Line on Costs of Microarrays


At A Glance

  • Edward K. Wagner
  • Professor of virology and molecular biology, department of molecular biology and biochemistry, University of California, Irvine.
  • 1962 — BA, biochemistry, University of California, Berkeley.
  • 1967 — PhD, biochemistry, Massachusetts Institute of Technology.
  • 1967-1970 — Postdoc, animal virology, University of Chicago.

Edward Wagner, professor of virology and molecular biology at the University of California, Irvine, has been researching RNA synthesis in the herpes virus for more than 30 years.

Wagner, 62, is an early adopter of microarray analysis. In 2000, after being funded by the US Public Health Service for most of his career, he won an NIH grant to create DNA microarrays for the study of neurotropic human herpes viruses.

BioArray News recently spoke with Wagner to learn about how he is adopting microarrays for his research.

How did you start in microarray analysis?

We began working with them in the late 1990s in collaboration with a colleague at Scripps, Peter Ghazal, who has now moved to Edinburgh and is the director of the Scottish Genome Center.

When the original paper by Botstein and others came out in Science, I said, ‘I want to do that with my virus,’ but I couldn’t do it with an Affymetrix chip. So I had to wait for the oligo method to come out. Peter published that in 1999 with cytomegalovirus. I called him and said, ‘We are interested,’ and he offered a collaboration and got us off to a good start.

How do microarrays fit into your research interests?

It’s kind of nice to tie everything into a package by being able to globally envision what is going on. It’s been very rewarding. If one were to isolate a new human herpes virus, or any large DNA virus, nowadays you have an approach toward learning a lot about it by sequencing the genome and doing some chip analysis. You can work out the details of regulation in a few months, work that took us 20-some years to do.

The importance of our research, for the microarray community, as opposed to the viral community, is our ability to essentially do very careful quantitative tests of different approaches towards microarray analysis using a system that we know shows differential gene expression and the restricted expression of certain genes under certain conditions. Basically, we have a very good testbed for checking methods in a quantitative way.

What have you learned?

The quantitative strength of the [microarray] system is quite powerful, as good as any other method we have for quantitating viral gene expression. In a space of a reasonably short time, one can get a complete picture of viral gene expression and a good picture of how cellular genes are perturbed by viral gene expression.

Can you describe your platform?

We use oligos for viral probes. Those are designed from known sequences of HSV-1 and, more recently, HSV-2, a chip that we are tinkering with right now. We select large oligos, 75-mers, and have them synthesized commercially, since prices are quite reasonable now. And we use those oligos to print up our glass slides for viral genes.

For cellular genes, the approach we favor, along with our collaborators in Edinburgh, is to use the cDNA sets that have been made available. For example, we are using a 5,000-gene subset of mouse-expressed sequence cDNA sets to look at the effects of viral infection on mouse gene expression. We can hybridize those under the same conditions as oligos and we can get quite representative data. For genomic arrays of cells and tissues, we think the cDNA arrays have advantages, if they are already available. If you want to look for a specific gene, and you have the sequence, oligos clearly form an effective way for looking at specific genes.

What kind of equipment do you use?

We don’t use the Affy system. Affy is just too expensive for the moderate-sized lab. My funding is not inconsiderable — we pull in $400,000 to $500,000 a year in research funding. But, in order to do a statistically complete analysis, it wouldn’t take very long to spend that.

Our chips are manufactured at the Gene Technology Institute at Edinburgh: we have a subcontract with them. I spent six months on a sabbatical cementing the relationship with Scotland. It’s a good way to go. One has to have the feeling that you have contact with the quality control and they know what you want. When you are spotting microarrays, quality control is a major issue. All you have to have is one bent pin or one dry well and you have a problem. We don’t have the manpower to do that kind of quality control. Most university centers, unless they are really set up to do it, can’t do it either. If you want to print chips continuously over a period of time, quality control becomes very important.

How long does it take to get your chips?

Designing a chip, like the HSV-2 chips, took a week of hard work on my part because I had to go through the genome and make sure I got everything right, and in the right order. Because I know these genomes so well, it’s easier for me just to spot them by eye. Once you have that, you go to a commercial company, with turnaround in a week, and then ship them to Edinburgh, which takes about three days by DHL, and they can print them up in a week. If we are really in a hurry, we can get them back in three or four weeks.

What are the costs involved in this process?

One of the great joys of this is that you can [buy] synthesized oligos so cheaply. When I wrote my [NIH] grant, I figured it would cost about 60 bucks a chip and I’m down to less than $18 a chip now. The costs are low because you can make, essentially, an infinite amount of oligos for one price. You order 25 micromolars, the smallest amount that they will make in a 96-well plate, on the order of $60 per oligo, times 100. So, that’s $6,000 for 96 probes — that’s a lot of probes for a virus. Let’s say you wanted to make three separate oligos for every gene: we like to duplicate them. So, you are still talking $18,000 for essentially an infinite amount of oligos. And, the more you print, the cheaper it is.
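As a sanity check on the arithmetic above, here is a minimal sketch using the round numbers quoted in the interview (the $60-per-oligo price and 96-well plate size are the interviewee's figures, not current vendor quotes):

```python
# Back-of-the-envelope oligo costing from the figures quoted above.
# PRICE_PER_OLIGO and PLATE_SIZE are the interview's round numbers.
PRICE_PER_OLIGO = 60   # dollars, at the smallest 96-well synthesis scale
PLATE_SIZE = 96        # oligos per plate

def plate_cost(n_plates: int) -> int:
    """Total synthesis cost, in dollars, for n_plates full plates."""
    return n_plates * PLATE_SIZE * PRICE_PER_OLIGO

one_probe_set = plate_cost(1)    # one probe per gene across a 96-well plate
triplicate_set = plate_cost(3)   # three separate oligos for every gene
```

At $60 per oligo, one full plate comes to $5,760, in line with the "about $6,000 for 96 probes" figure, and three plates to roughly $18,000.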

The big holdup on these chips is the price of the fluorescent dyes. We have just gotten involved in a collaboration study with Qiagen where we’re using a colloidal gold and silver binding [agent], and a light-scattering analysis technique. That looks like it is going to work and bring the price down by a factor of 10, because you use one tenth the amount of input RNA, and the reagents are 20 percent of the price of the fluorescent dyes. I’m very excited about the Qiagen approach. I have been consistently impressed with how comparable the overall data are between the two methods. I don’t think you could take a number of experiments done using colloidal gold, and a number of experiments done with fluorescent dyes, normalize them together and mix them as a single experiment. But, if one would compare the statistical test group in both methods, you would find very comparable results in the end. I do not want to argue that this will be the be-all and end-all, this is just one other method for looking at aspects of gene expression. It looks to me that, quantitatively, it might be quite reliable.

What technical hurdles have you had to overcome?

The design has been remarkably straightforward. I just can’t believe how well it worked. In designing the oligos, we had to go through [the herpes simplex genome] and scan for regions that were not too different, that had a reasonably consistent base composition, or were a certain distance from the 3’ end of the transcript unit. The first chip we made worked very well, then we began using polyadenylated RNA instead of total RNA, which worked even better. So I haven’t had any technical problems at all. In using the colloidal, we are down to about a tenth of a microgram of polyadenylated RNA for an experiment; and you are talking about real tissue now, instead of pooled tissue samples for real analysis.
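The probe-selection filters described above can be sketched roughly as follows. This is an illustration only: the GC-content window, the 600-base distance from the 3' end, and the `candidate_probes` helper are hypothetical values and names, not the lab's actual design pipeline.

```python
# Illustrative sketch of 75-mer probe selection: keep windows with a
# consistent base composition (GC fraction) lying near the 3' end of the
# transcript. All thresholds below are hypothetical, for illustration.
OLIGO_LEN = 75

def gc_fraction(seq: str) -> float:
    """Fraction of G and C bases in an uppercase/lowercase DNA string."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def candidate_probes(transcript: str,
                     gc_lo: float = 0.40, gc_hi: float = 0.70,
                     max_dist_from_3prime: int = 600):
    """Yield (start, oligo) pairs of moderate-GC 75-mers near the 3' end."""
    start_min = max(0, len(transcript) - max_dist_from_3prime)
    for start in range(start_min, len(transcript) - OLIGO_LEN + 1):
        oligo = transcript[start:start + OLIGO_LEN]
        if gc_lo <= gc_fraction(oligo) <= gc_hi:
            yield start, oligo
```

In practice one would also screen candidates for cross-hybridization against the rest of the genome, which this sketch omits.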

How do you handle data analysis?

I’m basically keeping everything on Excel spreadsheets. For quality control, we do a number of replicate experiments and combine the information to establish the statistical parameters. We don’t do anything very fancy with the statistics: Basically, we just ask for relatively tight standard deviations and group correlation coefficients.

Excel works well with arrays of up to about 5,000 genes. Beyond that, you have to get into other stuff. I’m holding back on that. We are playing with GeneSpring and several other approaches. I take the information that comes out in Excel-accessible form and I run a number of macros to sort it as I want. At the end, you have each experiment as a spreadsheet. When the experiment is completed, we normalize to the 75th percentile: we choose one experiment that we say is our nominal experiment and we normalize the other experiments to that. That allows you to bring the signal strengths into line with each other. By going to the 75th percentile, you throw away your low numbers; you de-emphasize those that will be background.
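The normalization scheme described above can be sketched in a few lines. This is a paraphrase of the interview, not the lab's actual macros; the nearest-rank percentile rule and function names are illustrative assumptions.

```python
# Sketch of 75th-percentile normalization: scale each experiment so its
# 75th-percentile signal matches that of a chosen nominal experiment.
# The nearest-rank percentile rule below is an illustrative choice.
import math

def percentile_75(values):
    """Nearest-rank 75th percentile of a list of signal intensities."""
    ordered = sorted(values)
    idx = math.ceil(0.75 * len(ordered)) - 1
    return ordered[idx]

def normalize_to_reference(experiment, reference):
    """Rescale `experiment` so its 75th percentile matches `reference`'s."""
    scale = percentile_75(reference) / percentile_75(experiment)
    return [v * scale for v in experiment]
```

Anchoring on an upper percentile rather than the mean keeps the scaling factor from being dominated by the many low, background-level spots.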
