
Roger Bumgarner, Director, UW Center for Expression Arrays


Received his PhD in physical chemistry from the University of Arizona in 1988.

Completed a postdoctoral Bantrell Fellowship in the division of geology and planetary sciences at the California Institute of Technology.

Served as an assistant professor and research scientist in Leroy Hood’s molecular biotechnology laboratory at the University of Washington from 1992 to 1998.

Currently serves as a research assistant professor at UW. Chaired the Northwest Microarray Conference in Seattle this August.

Q: How did you go about setting up the University array facility?

A: We decided the best way to operate was in a small group. In the literature, it sounds like you get some robots, buy some slides, get some stuff, and start making arrays. In fact, you have to put a lot of infrastructure in place before anything works. Once we had arrays that were working, we opened the facility to the rest of the campus as a cost center.

Q: Operating a ‘cost center’ means you charge researchers money for the arrays. How did you figure out what to charge them?

A: We went through a cost analysis, including labor, bioinformatics support, and amortizing our equipment cost, because I know I’m going to have to replace everything in my lab over three years. When we do that, it costs us about $260 to make each 15,000-spot array.
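That kind of cost analysis can be sketched as a simple calculation. Only the roughly $260-per-array figure comes from the interview; every input below (equipment budget, headcount costs, throughput) is a hypothetical assumption chosen to illustrate the arithmetic, not the facility's actual budget:

```python
# Illustrative cost-center math for a microarray facility.
# All inputs are hypothetical; only the ~$260/array result mirrors the interview.

def cost_per_array(equipment_cost, amortization_years,
                   annual_labor, annual_informatics,
                   annual_consumables, arrays_per_year):
    """Fully loaded cost of one array, amortizing equipment over a fixed horizon."""
    annual_equipment = equipment_cost / amortization_years
    total_annual = (annual_equipment + annual_labor
                    + annual_informatics + annual_consumables)
    return total_annual / arrays_per_year

per_array = cost_per_array(
    equipment_cost=300_000,   # arrayers, scanners, robotics (assumed)
    amortization_years=3,     # "replace everything over three years"
    annual_labor=150_000,     # assumed
    annual_informatics=100_000,  # assumed
    annual_consumables=170_000,  # slides, clones, reagents (assumed)
    arrays_per_year=2_000,       # assumed throughput
)
print(round(per_array))  # → 260
```

The key point is the amortization term: spreading the capital cost over a fixed replacement horizon is what turns a one-time equipment purchase into a per-array charge.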

Q: How do you work with researchers on array experiments?

A: [They] do their own probes, hybridizations, and washes, then send the slide back to us for scanning. We’ll make the data available to them on a central file server, along with data analysis tools. Typically, before that we provide them training in our lab, which we have found to be a really critical component. When they go and do it in their own lab and they have problems, they are much less likely to blame it on us.

Q: What kinds of arrays do you offer in the facility?

A: We make human, yeast, mouse, and Pseudomonas aeruginosa arrays. We’re also now going to be adding Affymetrix services, because there are a lot of people who want to do Affy arrays, and Affy arrays fill in gaps. We don’t do rat clones or E. coli arrays, and Affy has those.

Q: How do you manage and analyze the data?

A: We have a fairly large bioinformatics effort … essentially five full-time software engineers. We go through 600 gigabytes of storage space, so we’ve had to set up something like a dot-com at our core facility.

Q: What kinds of arrayers do you use?

A: I’ve got an arrayer from Molecular Dynamics, Amersham Pharmacia Biotech. I also got a Genetix arrayer, an IX. In the interest of disclosure, I am on the scientific advisory board of Genetix. We do most of our spotting with the Amersham arrayer, but I am really making sure that I have two arrayers so I am not dependent on any one system or any one vendor in the long run.

Q: How have the arrayers worked?

A: From every manufacturer we’ve ever touched, there was a significant issue with surface variability. All of these technologies are bleeding edge, or leading edge, or beta. Every manufacturer has a software bug or a breakdown. Amersham has been pretty responsive when I’ve had problems, which is part of the reason I am still working with them.

Q: What is your biggest challenge with microarrays?

A: Other than RNA, it is getting to a reliable list of genes. My guess is that somewhere on the order of 70 percent of all the genes that are published as differentially expressed are not reproducible. And any paper that only did one array and used a simple thresholding technique, which is the de facto standard, is going to produce a significant number of false positives.
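The statistical point here can be illustrated with a minimal simulation. The sketch below assumes no gene is truly differentially expressed and applies a simple two-fold-change cutoff to a single array; the noise level and gene count are illustrative assumptions, not measurements from the facility:

```python
# Minimal simulation of why single-array fold-change thresholding
# produces false positives. All parameters are illustrative assumptions.
import random

random.seed(0)
n_genes = 10_000   # genes on the array; none is truly differential here
noise_sd = 0.3     # assumed log2-scale measurement noise per channel
threshold = 1.0    # "two-fold change" cutoff on the log2 ratio

# With no real expression differences, the observed log2 ratio of the
# two channels is pure measurement noise.
false_positives = 0
for _ in range(n_genes):
    log_ratio = random.gauss(0, noise_sd) - random.gauss(0, noise_sd)
    if abs(log_ratio) >= threshold:
        false_positives += 1

# Even with modest noise, on the order of a couple hundred genes clear
# the cutoff despite there being no true signal at all.
print(false_positives)
```

Replicate arrays and a proper statistical test shrink this list dramatically, which is why a single array plus a threshold tends to overstate the number of differentially expressed genes.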

Q: What is your wish-list for future microarray tools?

A: Inexpensive oligo arrays with oligos that are sufficiently long to offer a good signal-to-noise ratio, but that have been selected to represent different splice variants.

I also need a different kind of database. I would much rather have the annotation lead forward to the proteins than backward to the genome.