
UCSD's Gary Hardiman Discusses the Future of Core Array Facilities


At A Glance

Gary Hardiman, director, BIOGEM (Biomedical Genomics Microarray Facility), and assistant professor, Department of Medicine, University of California, San Diego (Feb. 2000-Present)

Education: 1993 — PhD, Molecular biology, National University of Ireland (Galway, Ireland).

1989 — BSc (Honors), Microbiology, National University of Ireland (Galway, Ireland).

Experience: 1998-2000 — Senior Scientist, Gene Expression Analysis Group, Axys Pharmaceuticals.

Research Interests: Gene expression analysis (microarray technology development and optimization).

 

With more than five years of experience with microarray technology in both academic and private-sector settings, Gary Hardiman, the creator of BIOGEM, the microarray facility of UCSD's Department of Medicine, can claim a wide view of the technology's development.

Hardiman brought some of this panoramic perspective to the book he authored, "Microarrays Methods and Applications: Nuts & Bolts" [just published by DNA Press, Eagleville, Pa., $42.50], and to a paper in Pharmacogenomics, "Microarray technologies 2003 – an overview." BioArray News spoke with Hardiman about the book, his experience leaving the private sector to set up a microarray facility at UCSD, and what the future looks like for microarray core facilities like the one he runs at the school's Department of Medicine.

How did you get involved with microarrays?

Six years ago, I worked with Sequana, a genomics company that was acquired by Axys Pharmaceuticals, which was later acquired by Celera. I was involved with [the company] Molecular Dynamics, as part of [a] technology access program, making arrayers relevant for what people were doing in the company. That got me into it. When the company closed, I moved to UCSD and set up the BIOGEM core facility [BIOGEM stands for Biomedical Genomics Microarray Facility].

How was BIOGEM funded?

The money came from a variety of sources, some local and some from the NIH. Chris Glass and other professors got together to buy microarraying equipment, securing an NIH grant that the university matched. We bought an arrayer, the same as I had used in industry, as well as a Qiagen liquid-handling robot and some scanners. As time has gone by, we have built on that list. Recently we purchased a MicroGrid II from Genomic Solutions for higher-throughput work.

What platforms do you use?

We do everything but Affymetrix. When I arrived at UCSD, there was already an Affymetrix core facility and we didn't want to duplicate what existed there. And there was enough separation that both cores could exist side by side. BIOGEM focused on making more custom arrays, which was harder to do with Affy then than it is right now. The rationale was that if an investigator had done Affymetrix experiments, they could use us to do a more focused, more detailed study. We still do custom arrays, at densities from 1,000 to 20,000 [probes per array]. Currently, we are starting to get into the area of chip-on-chip assays — promoter arrays. We have a consortium of people involving labs here and others who are starting to pioneer this technology. When a technology isn't available commercially, there is a need for a core facility to help researchers get the arrays they want made. [See BAN 6/18/2003]. From what I have seen, if a researcher is working with mouse and human, there are arrays available from Agilent, CodeLink, MWG, and many others, or researchers can buy oligo sets and print them themselves.

What kind of volume in terms of arrays do you do?

That is a very difficult question to answer. We do some 500 to 1,000 CodeLink and Agilent arrays a year, which is a significant number. But there are lots of investigators who are working on organisms that haven't attracted commercial interest. They amplify the genome or ESTs and provide us with sets that we print. The arrayers are constantly working. We have two full-time people and three other people who work here part time. The way the labor has been divided is that one individual deals with CodeLink and Agilent, and another has a lot of experience with spotting. She is the hands-on expert at getting good arrays to people. And we have a person who does informatics analysis, an undergraduate student, Ivan Wick. He is great; he has been here with me for a couple of years. He has the BASE database and our cluster of Linux servers up and running. We have a lot of commercial [applications] — I like [the] BioDiscovery suite for analysis — but in terms of databases, we opted to go with BASE, and are happy with it. We are big proponents of open source — if it's free and cheap, we are interested. We are working closely with [the] supercomputer center at UCSD. There is an effort to do something on a bigger scale, to archive data and to provide analysis to researchers at UCSD.

So you have data from spotted microarrays, as well as CodeLink and Agilent arrays. That sounds like an integration/concordance nightmare. How do you deal with that data?

I think that is the biggest challenge facing microarray researchers today. Generating microarrays is easy, but organizing, consolidating, and extracting the data is more of a challenge. You have to make sure the data is statistically sound. You have to have replicates and make sure the data behaves consistently across them when doing statistical tests. Only if it passes the statistical tests can you extract information. In terms of comparing the different platforms, there is one beautiful study in the book by Philip Stafford and Ping Liu, who look at four different technologies — Affymetrix, Agilent, Amersham, and Mergen — and then look at what the microarray data from each looks like. One of the interesting things is that some genes are detected in common and overlap across the platforms, but each platform also detects differentially expressed genes that are missed by the others. Each is useful, and I think that if one can afford it, and has the samples, one can run them on each of the platforms. Each will generate data that is not necessarily different but is of value. I think that facilitating these cross-platform studies is in the future of core facilities.
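As a rough illustration of the replicate-and-test workflow Hardiman describes, the sketch below filters array features by a simple one-sample t-test on replicate log2 ratios. It is not BIOGEM's actual pipeline; the data, replicate count, and cutoff are hypothetical.

```python
# Minimal sketch (hypothetical data): keep only features whose replicate
# log2 ratios pass a one-sample t-test against zero before any interpretation.
import numpy as np
from scipy import stats

def filter_by_replicate_ttest(log2_ratios, alpha=0.01):
    """log2_ratios: (n_features, n_replicates) array of test/reference log2 ratios.
    Returns indices of features whose mean ratio differs from zero at level alpha."""
    _, p = stats.ttest_1samp(log2_ratios, popmean=0.0, axis=1)
    return np.where(p < alpha)[0]

# Hypothetical example: four replicate hybridizations for three features.
ratios = np.array([
    [1.8, 2.1, 1.9, 2.0],    # consistently up-regulated -> kept
    [0.1, -0.2, 0.05, 0.0],  # unchanged -> filtered out
    [1.5, -1.2, 0.9, -0.8],  # inconsistent across replicates -> filtered out
])
print(filter_by_replicate_ttest(ratios))  # [0]
```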

You talked about the future of core facilities. Can you expand on that?

The business model was originally for these facilities to be stand-alone and self-sufficient, and for their arrays to be an inexpensive alternative to commercial products. That model is flawed now because it is difficult to compete with arrays made in commercial environments with stringent quality controls. There has been a lot of evolution over the last three years: the technology has gotten much better and off-the-shelf arrays have become competitively priced, while making good microarrays in-house remains quite challenging. The technology has changed, and the facility has changed with it as time has gone by.

So core facilities are becoming more integrated into the laboratories of researchers who have a strong interest in doing post-genomic analysis. The whole concept of having facilities as a stand-alone business is not going to survive. But these facilities are a vital resource in any university — they contain the equipment and the individuals who can get the information to those who want it — and there is value in that. Core facilities will become more specialized research centers rather than the stand-alone facilities that you find now.

We are going to see core resources that will just do data mining for people at the back end. At the front end, there is still going to [be] a need for specialized people in core facilities to do [microarray investigation]. One of the problems is the lack of adequate informatics infrastructure to deal with the bulk of the data. When I moved here, it was not too great a challenge to generate arrays. But we had the supercomputer group here. If they hadn't been there, it would have scared me.

The field of microarrays is so dynamic, and the pace is only getting more rapid. How do you manage that for the microarray course you have organized for the last three years?

Since 2002, I have been involved with the university bioscience extension — I've been teaching there longer than I have been on the full-time faculty. Last year, I helped them put together a three-day microarray course. I had been teaching a one-day class, but there is just so much material to cover in one day. So, to do justice to the area of microarray technology, we decided to organize a three-day course and bring in experts on different aspects of microarray technology, microfluidics, and the informatics to deal with the data afterwards. We got a great group together in 2002 to do the three-day course. The book evolved from that class: a lot of the people who had presented material wrote chapters, and my job was to hound them to contribute. The course was a great success last year, drawing 40 people from around the country. It was a pretty competitive class to get into, as the extension department decided not to go much bigger than that so that individual questions could be handled better than in a large seminar class.

We have a great group of people who work on microarrays here in San Diego whom we can rely on to give good talks. We have Juan Yguerabide, a former professor from UCSD, who more or less invented nanoparticles. He always gives fascinating talks on the technology. And we have Dave Weiner, another local scientist. They, and others, are the folks who have been with the course from the get-go. They turn up and bring us up to speed on what they have been working on over the last 12 months. We also leave slots open for emerging technologies. We had Amersham give a presentation a year ago, when they had just bought CodeLink.

Can I get you to look forward a bit to the future for microarrays?

One of the best things to come out is MIAME, because it has definitely put more of an onus on the investigator to think about the experiments, to make sure the data is available in a format from which the work can be reproduced, and to think about sample tracking and adequate sample annotation before even getting to running the microarray. MIAME has been an excellent effort to standardize this.
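As a loose sketch of the kind of up-front sample annotation this implies, the record below is purely illustrative: the field names and values are hypothetical and do not reproduce the official MIAME checklist.

```python
# Hypothetical, MIAME-style annotation captured before a hybridization is run.
# Field names are illustrative only, not the official MIAME checklist.
sample_record = {
    "sample_id": "liver_tx_01",                  # hypothetical sample-tracking ID
    "organism": "Mus musculus",
    "tissue": "liver",
    "treatment": "compound X, 10 mg/kg, 24 h",   # hypothetical treatment description
    "rna_extraction_protocol": "protocol_rna_v2",
    "labeling": "Cy5",
    "reference_sample": "pooled_liver_control",
    "array_design": "custom_mouse_10k_v1",       # hypothetical array design name
}
```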

One of the limitations of microarray technology is that you are not dealing with absolute expression; you are looking at relative expression. The ability to quantitate mRNA, the number of mRNA molecules per cell, is something that a lot of researchers would like, as it would make the data more comparable. But it is hard to do that with microarrays. One of the problems is that for every probe or feature, where you have hybridization of a probe to a target, the dynamics and kinetics of each feature will be different because of the nature of the sequences themselves. So it is not going to happen with arrays. It is hard to say what technology it will be, but it will probably be on a real-time PCR basis.
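To make the relative-versus-absolute distinction concrete, here is a small, hypothetical worked example (the intensities, standard-curve fit, and Ct value are invented for illustration): a two-color array feature yields only a ratio, while a real-time PCR standard curve can be inverted to an absolute copy number.

```python
# Hypothetical numbers only; an illustration of the concept, not a protocol.
import math

# Relative expression (what a two-color microarray feature gives you):
cy5_intensity = 12000.0   # test-sample signal for one feature
cy3_intensity = 3000.0    # reference-sample signal for the same feature
log2_ratio = math.log2(cy5_intensity / cy3_intensity)
print(f"relative expression: log2 ratio = {log2_ratio:.2f}")  # 2.00, i.e. ~4-fold up

# Absolute quantification (real-time PCR with a standard curve):
# fit Ct = slope * log10(copies) + intercept on a dilution series of known
# standards, then invert it for an unknown sample.
slope, intercept = -3.32, 38.0   # hypothetical standard-curve fit (~100% efficiency)
ct_unknown = 24.5
copies = 10 ** ((ct_unknown - intercept) / slope)
print(f"absolute expression: ~{copies:.0f} template copies in the reaction")
```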

Yes, throughput is a limitation; it isn't there with PCR. But I think that in terms of doing this absolute analysis, it is not going to happen with microarrays.

In terms of microarrays, you are going to see them going into the clinic. There will be a series of tests of interest to clinicians. You will see these biomarkers, the ones that are of greatest interest, appearing on different chips. You already see a lot of P450 SNP-based arrays; that is just the beginning.
