At A Glance
- Vishwanath (Vishy) Iyer
- Assistant professor, molecular genetics and microbiology, Institute for Cellular and Molecular Biology, University of Texas at Austin.
- Education — 1996: PhD, Harvard University
- 1989: MS, University of Baroda (India)
- Postdoc: Stanford University Medical Center (Pat Brown lab)
- Research Interest — Genomics, genome-wide transcriptional programs and mechanisms, microarray technology.
Vishy Iyer is one of the early microarray alumni of Pat Brown’s lab at Stanford. When he arrived at the University of Texas at Austin two and a half years ago, he proceeded to set up a spotted microarray facility, building it from scratch on the blueprint he learned at Stanford. Today, he shares that knowledge, and his equipment, with colleagues and others.
This June, Iyer will be the lead instructor of the popular microarray course at Cold Spring Harbor Laboratory. Iyer will teach a class of 20 students how to build an arrayer and the protocols of home-brew microarraying.
BioArray News caught up with Iyer recently to discuss his lab, and his plans for teaching the course at Cold Spring Harbor.
Could we describe you as a proselytizer for spotted arrays?
Other than doing my research, I’ve done nothing but that. It’s passing on the torch. When I came here, I had to personally train all the students in the lab. I have graduate students, PhD students, and they just pick it up, the same way that I picked it up several years ago. Now, when someone comes to me and says: ‘I want to do an array experiment,’ I say: ‘talk to my students,’ and they totally help out. It gets passed on. When I came here, I was the only one who had done any microarray experiments. Now there are people — students in my lab, and collaborators all around Austin — who have done more experiments than I have.
One undergraduate student came to my lab and helped us build the arrayer. He learned how to build it and then he went and built one on his own for his dad who is a doctor and runs a lab in Houston.
How long did it take you to set up your laboratory?
I came in September, and we printed our human array in September of the next year, so it was a year before we printed our first big human chip. I basically had to build the lab from scratch. We had to amplify all our yeast clones by PCR, which took a few weeks; and doing the human cDNA took a little bit longer — we had 47,000 cDNAs that had to be replicated and amplified.
Didn’t that setup time have a cost, even just in opportunity cost?
There is always some setup time. Let’s say I wasn’t doing spotted arrays, and I came here and wanted to do Affy arrays. I would have to pay a huge amount of money to get the Affy hardware in the lab, and then I would have to pay them for every chip I used. So, right now, it’s a small investment of time, a few months to a year, but at the end of it, we have a setup where, basically, it costs me virtually zero dollars to print an array.
What is the culture of your lab?
We are very open. Everything we do is described on the web. We have collaborations with a number of labs that use our setup to do their own thing. We give them the initial training, and it’s all very informal. [People] learn to use the arrayer, the scanner, collect data, do experiments, and analyze data. My involvement is to make available all the resources that we have — the arrayer, the scanner, the clones, all of which I purchased with my startup money. It’s a common-use facility, but it’s not a service facility. We don’t run samples for people. It’s very similar to how it was in Pat Brown’s lab in the olden days.
With all those different labs sharing your machines, there must be some cross-pollination. Is that so?
[My lab] is in this institute that has people from different departments — chemistry, molecular biology, engineering, and computer science. It’s not just arrays. There are people who are using our arrayers who are not doing the same kinds of experiments we do, or they are just trying to develop new ways to use arrays. For instance, some are using aptamers, which are small RNA or DNA oligos with very specific properties that will bind some small molecules or proteins. This is not the only place that is trying them, but that is an example of this kind of crosstalk. My expertise is not with aptamer chemistry, but once someone has that expertise, they can use our setup to print it and use it for novel applications.
My lab is focusing a lot on arrays. Our biological questions are [about] control of gene expression and transcription. We are interested in using high-throughput protein identification by mass spec to look at transcription factor complexes. It’s something I probably wouldn’t be doing if I was by myself in my own lab. Given that I have colleagues with expertise and interest in those areas, it’s great. It’s a very symbiotic relationship.
How is the microarray course that you will be teaching at Cold Spring Harbor different from the teaching you do at Texas?
This year, the main thing that has changed [at Cold Spring Harbor] is the instructor. We will still let the students take charge. We have an experiment where we give the students RNA samples and protocols for labeling and hybridizations to do on their own. At the end of the course, they have to figure out what we gave them, based on expression profiles. We will spend a little more time on data analysis, looking at more ways of analyzing array data — not only what the students create in the course, but also the publicly available data.
We do the teaching at Cold Spring Harbor in a much more structured way. Here, they come in and want to talk about how to set up an experiment, and a week later they come back in with some clones, and they want to know how to print. In the microarray course, all of that is packed into a short time. The same kind of information is imparted, but the pace is different. Here, I don’t teach so much about building the arrayer. We have one.
We do the arrayer building as part of the course at Cold Spring Harbor so that people can go back to their labs and show others how to build one, so it just keeps going. Building the arrayer is a great exercise for people.
What happens to the arrayers you build in the Cold Spring Harbor course?
One of the students will usually buy it. Some of them are setting up labs. Last year, one of the instructors was starting her lab, and she purchased it. Cold Spring orders all the parts, and the purchaser reimburses them for that. It’s about $40,000.
There is a legend that Michael Dell started his business in a dorm room at Texas. Is anybody building arrays in their Austin dorm room these days?
We joke about that but, so far, there are none. But it wouldn’t be too hard to set up a small business on the side. The old website that Joe (DeRisi) has, [which] describes how to build an arrayer — that’s the older generation of arrayer — and so many people have used that to build one; it just gets propagated.
What’s the biggest run of arrays that you have done?
We print 260 chips in one shot. We have printed over a thousand human arrays, a couple of thousand yeast arrays, and the same number of mouse arrays, which come from a smaller set of clones — we have 15,000 mouse clones.
What would you like to see changed or improved in microarray technology?
One change that I’ve seen is that when arrays were developed, a big motivation was to look at expression profiles in cancer. Today, any lab that is studying any biology problem is realizing the value of looking at the whole genome. There are people here studying very basic questions about prokaryotes who are using arrays. People studying embryonic development are using arrays. It’s affecting more and more different fields, and that’s what is going to expand in the next few years.
Analysis is still a bottleneck and it frightens a lot of people, but I think we’ll see more and more automated analysis that will build pathways and draw predictive biological models based on array data. Now, if you are looking at big clusters, and going to the literature to look up genes one by one, that’s a slow process. There are lots of clustering programs and they are very fast. It takes seconds to run a big cluster, but that’s not the end of the analysis. What happens next is really limiting. [To look at that process] we will do some development in the area of drawing higher-level inference from large bodies of data.
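The fast clustering step Iyer mentions can be sketched in a few lines. This is an illustrative example, not code from his lab: a toy expression matrix with made-up values, clustered hierarchically on correlation distance (a common choice for expression profiles) using SciPy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Toy data: 100 "genes" x 8 "arrays" of log-ratio expression values.
# Real input would be the normalized ratios from scanned microarrays.
expression = rng.normal(size=(100, 8))

# Average-linkage hierarchical clustering on correlation distance.
Z = linkage(expression, method="average", metric="correlation")

# Cut the tree into at most 5 groups of co-expressed "genes".
clusters = fcluster(Z, t=5, criterion="maxclust")
print("linkage matrix shape:", Z.shape)
print("clusters found:", len(set(clusters)))
```

As the interview notes, this step runs in seconds even for large matrices; the slow part is the next one — mapping clusters back to gene function and biology.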
Is there any scenario where you would get rid of your equipment?
Totally. The issues are price and openness. The day will come when I can buy an array almost as cheaply as I can make one on my own — not zero cost, but something like two dollars, a price where you don’t have to think about using an array in an experiment. You don’t have to say, ‘this is going to cost me $400, so I better get it right.’ That is not the way to promote bold experiments and new and crazy stuff. If I can do that from a commercial source, then that will be great and I will stop using my machine.
Another issue is openness. When we make an array in the lab, we pretty much know what everything is. You can’t always get the same from a company.