Mike Snyder of Yale, on the Yeast Localizome and Protein Arrays

AT A GLANCE

NAME: Michael Snyder

AGE: 46

POSITION: Professor and chair of molecular, cellular and developmental biology and professor of molecular biophysics and biochemistry at Yale University; co-founder of Protometrix

PRIOR EXPERIENCE: Postdoctoral fellow with Ronald Davis at Stanford University (1982-1986); PhD with Norman Davidson at the California Institute of Technology (1978-1982)

In this first section of a two-part interview, Mike Snyder describes how yeast genetics and cell biology led him into proteomics.

What did you do as a postdoc in Ron Davis’ lab at Stanford?

That’s when I learned [how to use] yeast. At that time it was the only organism in which you could do homologous recombination, which made it a great organism for studying all kinds of problems. I was involved in setting up λgt11 cloning and some other technologies. That was cloning genes with antibodies; it might have been my first introduction to technology development. We actually set up expression libraries, believe it or not, so it has relevance to my work 15 years later, I guess, but it involved expressing proteins from a λ bacteriophage. To try and clone your gene, you would screen with an antibody. You detected the clone by virtue of the fact that it was expressing the protein recognized by your antibody. It was actually one of the first successful expression systems, and I was one of the inventors of that technology.

 

How did you get into genomics and proteomics?

When I arrived at Yale [in 1986], we studied both chromosome segregation and cell polarity. Since I knew how to clone genes with antibodies, we were screening a collection of autoimmune sera. This is a way of getting probes against lots of different proteins, with the goal of getting probes against the processes you want to study. We screened this collection on human cells and found all kinds of really neat staining patterns; some cross-reacted with yeast, and we cloned out the corresponding genes in humans or in yeast.

We transitioned into genomics and proteomics when we realized that this was a very inefficient way to find genes of interest. [In the late 1980s] we realized that we could actually tag all the proteins; we came up with this transposon tagging method [for tagging genes and proteins]. The transposons were built in such a way that you could set up reporters of gene expression, you could delete the transposon and leave behind epitope tags, and the transposons would give disruption phenotypes if you put them in a haploid setting. Believe it or not, that was actually the first functional genomics project in any organism. The nice thing was that, with a very limited amount of resources, one person could study gene expression, protein localization, and gene disruption phenotypes, although we had to do a lot of informatics to keep track of all the insertions and see which genes had been tagged. We just published the latest rendition of this [in Genes and Development], where we have tagged over 60 percent of the yeast genes and localized most of the yeast proteins. So we were able to present this “localizome,” which turns out to be really valuable information in many respects. We published the first rendition of this in 1994; we had a hard time getting it funded.

It turns out that a lot of [interaction] information [from large-scale datasets] is fairly error-prone. [Our] localization information is not perfect either, but we at least know how accurate it is. We went through the two different [yeast protein interaction datasets published in Nature last February], and it’s pretty clear that only something like 20 percent of the data matches. The data that matches almost always involves proteins in the same subcellular compartment, and the data that doesn’t match almost always involves proteins in different subcellular compartments. It illustrates that if you just took one dataset and filtered it with [our] localization [data], you would increase the accuracy of either of these datasets dramatically. It’s a perfect illustration of how we need all of these datasets to get at the real interactions.
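To make that filtering idea concrete, here is a minimal sketch of how an interaction list might be cross-referenced against a localization map, keeping only pairs whose partners share a compartment. The protein names, compartments, and the filter_by_colocalization helper are illustrative assumptions, not part of Snyder's actual pipeline.

```python
# Sketch: filter putative protein-protein interactions by requiring that both
# partners map to the same subcellular compartment in a localization dataset.
# Names and data here are made up for illustration.

def filter_by_colocalization(interactions, localization):
    """Keep only interaction pairs whose partners share at least one compartment.

    interactions: iterable of (protein_a, protein_b) tuples
    localization: dict mapping protein name -> set of compartments
    """
    kept = []
    for a, b in interactions:
        compartments_a = localization.get(a, set())
        compartments_b = localization.get(b, set())
        if compartments_a & compartments_b:  # shared compartment -> more plausible
            kept.append((a, b))
    return kept

# Toy example with hypothetical yeast protein names
localization = {
    "YFG1": {"nucleus"},
    "YFG2": {"nucleus", "cytoplasm"},
    "YFG3": {"mitochondrion"},
}
interactions = [("YFG1", "YFG2"), ("YFG1", "YFG3")]
print(filter_by_colocalization(interactions, localization))
# [('YFG1', 'YFG2')] -- the nucleus/mitochondrion pair is filtered out
```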

 

Why use protein chips to study protein binding and functions?

The nice thing about protein chips is that you really control the conditions; you are in charge. You can change concentrations, which lets you look at a whole spectrum of affinities. Obviously these are all in vitro assays, and you ultimately have to verify everything with in vivo assays, but it’s a great way to find [binding] candidates. If you are interrogating individual proteins, quite frankly, all you do is pull out a chip and your probe, and you are done. It’s just so much more rapid than two-hybrid, for example. You can explore biochemical functions just as fast; you are only limited by your assays.

 

What are the advantages of your microwell arrays?

There are certain kinds of assays that are just better done in a well format because they involve several components. Not that you can’t do them on a surface, but you can control the environment a lot better. The wells we used initially hold 300-nanoliter volumes, so you really don’t need very much material, and they also reduce evaporation. You can do multi-component reactions in those wells and keep them segregated from neighboring wells. It’s really nice for [kinase] inhibitor studies. We think it’ll be nice for certain small-molecule studies as well, where you could incubate small molecules in the wells, wash them off, and then, using mass spec, elute off the bound small molecules and find out what they are.

What is special about your yeast proteome chip?

When I go to these proteomics meetings, a lot of these companies say, ‘We spend all this time on making nice surface technology and arrays.’ But that’s not the rate-limiting step in the whole protein chip business. We figured that out in a week, just by testing lots of different conditions. The hard part was making a high-quality expression library, and that took a lot of time, because getting the clones sequence-verified [and] making sure they are fused properly by sequencing across the junction takes time, and it’s not that cheap either. There were three things you needed to accomplish this feat: the first was the high-quality expression library; the second was procedures for making lots of proteins at once, so we had to set up high-throughput protein production; and the third was the array technology itself. The biggest thing about our protein chips is that the assays are exquisitely sensitive. We are putting down 10 million molecules in a typical spot, but we only need 1,000 to see a positive signal. If 99.9 percent of your material is dead, you still see it.
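The arithmetic behind that sensitivity claim can be checked directly from the numbers quoted above; the short calculation below is only a back-of-the-envelope illustration of the figures Snyder cites.

```python
# Back-of-the-envelope check of the sensitivity claim: a typical spot carries
# ~10 million molecules, and ~1,000 active molecules give a positive signal.
spot_molecules = 10_000_000
detection_threshold = 1_000
fraction_dead = 0.999  # "99.9 percent of your material dead"

active = spot_molecules * (1 - fraction_dead)   # ~10,000 molecules still active
print(active >= detection_threshold)            # True: the spot is still detectable
```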
