Ruedi Aebersold on the Early Days of Electroblotting and Quantification

At A Glance:

Name: Ruedi Aebersold

Age: 49

Position: Co-founder, Institute for Systems Biology, Seattle, Wash., since 2000.

Plans to move to Zurich, Switzerland, in November to become a professor of systems biology at ETH Zurich (the Swiss Federal Institute of Technology).

Background: Professor and associate professor of molecular biotechnology, University of Washington, Seattle, 1993-2000.

Assistant professor of biochemistry, University of British Columbia, Vancouver, Canada, 1988-1993.

Postdoc, California Institute of Technology, 1984-1988.

PhD, cellular biology, University of Basel, Switzerland, 1983.

What was the protein research field like when you were a postdoc at Caltech?

I was working with Leroy Hood’s group — that’s the group I joined as a postdoc. They had developed a substantially more sensitive protein sequencer, a so-called gas/liquid-phase protein sequencer. That was a real jump in sensitivity of the actual sequencing process, which at the time was a chemical degradation of the protein. And it was clear that while the instrument was more sensitive, the preparation and feeding of the sample into the instrument had not been developed at the same pace. The sequencer was actually more sensitive than the sample preparation methods. At the time, almost everyone who was isolating or purifying a protein for sequencing used the big, plastic Sephadex columns or other protein purification columns, and those simply were not compatible with the very small amounts of protein these sequencers could now sequence. So my project was basically to adapt protein isolation to a small scale, so that small amounts of protein could be purified efficiently in a way that was compatible with the sequencing machine.

At the time, was there a lot of demand for protein sequencing?

Yes. They were doing a lot of protein sequencing. That was the time that gene cloning had been fully developed, so it was a very, very popular experiment to purify a protein that carried out a specific activity that some biologist might be interested in, like a growth factor or a protein kinase — whatever the case might be. Whenever a protein was purified, there was interest in cloning the corresponding gene. Of course at that time there was no genome project yet. And so the key to doing that was to generate stretches of protein sequences from the isolated protein and then to reverse translate these sequences into probes which could be used to screen the cDNA libraries and to pull out the corresponding cDNA.
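
Reverse translating a peptide into an oligonucleotide probe runs into the degeneracy of the genetic code: most amino acids are encoded by several codons, so a probe is really a pool of all possible encodings, and the practical trick was to pick the stretch of peptide sequence with the fewest possible encodings. Below is a minimal sketch of that calculation; the codon counts follow the standard genetic code, but the window length and the example peptide are illustrative assumptions, not details from the interview.

# Codons per amino acid in the standard genetic code
CODONS_PER_AA = {
    "A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
    "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
    "T": 4, "W": 1, "Y": 2, "V": 4,
}

def degeneracy(peptide: str) -> int:
    """Number of distinct DNA sequences that could encode this peptide."""
    n = 1
    for aa in peptide:
        n *= CODONS_PER_AA[aa]
    return n

def best_probe_window(peptide: str, window: int = 6) -> tuple[str, int]:
    """Find the stretch with the fewest possible encodings, i.e. the one
    that reverse translates into the smallest degenerate probe pool."""
    windows = [peptide[i:i + window] for i in range(len(peptide) - window + 1)]
    best = min(windows, key=degeneracy)
    return best, degeneracy(best)

# Hypothetical peptide stretch from a sequencing run; Met and Trp each have
# a single codon, so windows containing them make the least degenerate probes
win, deg = best_probe_window("MDWKTRCFLEGHNQ")
print(f"Least degenerate window: {win} ({deg} possible encodings)")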

How did you go about finding a way to isolate small amounts of purified protein for sequencing?

What was clearly apparent was that it was very straightforward to separate small amounts of proteins at very high resolution in gels, like SDS gels, one-dimensional or two-dimensional gels. So what we basically tried to do, which ultimately worked out quite well, was to connect the separation power of gels to the protein sequencer. We had to find ways to recover the separated proteins without losing a lot of material and without modifying the proteins so that they could no longer be sequenced. In that context we developed an electroblotting technique basically to do the transfer — to electrophoretically transfer the proteins out of the gel onto a solid support [that] could then directly be inserted into the sequencing machine for sequencing.

What did you work on next after developing the electroblotting technique?

Then I moved to the University of British Columbia in Vancouver. What we noticed then, when we isolated a lot of proteins [by using the electroblotting technique], was that many proteins could not be sequenced. The chemical sequencing method that was the standard at the time — the Edman degradation — requires that the protein has a free amino group at the amino terminus. Any protein that is modified at its amino terminus is not accessible to this technique, and that turns out to be almost half of all proteins. So the upshot was that many proteins were not really sequenceable. We then developed a technique where we would isolate these proteins in essentially the same way — separate the proteins in gels and then isolate them on an electrotransfer membrane — but then we digested the proteins on this membrane. We could then recover peptide fragments and sequence individual peptides. That overcame the problem of amino-terminal blocking. That protocol was published in 1987 or so.
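
The on-membrane digestion step is easy to illustrate in silico. The interview does not name the protease, so this sketch assumes trypsin, the standard choice, which cleaves after lysine (K) or arginine (R) except when the next residue is proline; the example sequence is made up.

def tryptic_digest(protein: str, min_length: int = 5) -> list[str]:
    """Split a protein sequence into tryptic peptides."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        # Cleave C-terminal to K or R unless the next residue is proline
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    # Internal peptides carry free alpha-amino groups, so even a protein
    # blocked at its amino terminus yields fragments open to Edman chemistry
    return [p for p in peptides if len(p) >= min_length]

# Made-up sequence; note that the "RP" site is not cleaved
print(tryptic_digest("MKTAYIAKQRPQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"))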

When did you start working on quantification of proteins?

Ten years ago or so, the main mindset was still a kind of biochemical mindset — to isolate the protein to basically homogeneity, and then to sequence the protein and clone the gene. That still hadn’t really changed. The big change occurred in the mid-1990s, when genome projects started to be completed — first bacteria, then yeast, then of course the human and other eukaryotes. That introduced a dramatic change in this whole protein-analytical world, because rather than isolating a protein to homogeneity, sequencing it, and cloning the gene, we were now becoming interested in identifying large numbers of proteins for the purpose of establishing profiles. Basically, people were working on the protein equivalent of expression array methods: you can isolate complex samples from multiple individuals, or from cells in different states, and compare their profiles. That of course requires that you do measurements on a large number of proteins and that you do these measurements quantitatively.

How did you go about doing the measurements quantitatively?

To do the measurements quantitatively we used a trick referred to as stable isotope dilution — an old trick to basically turn the mass spectrometer into a quantitative device. The general idea is that if you have two molecules that are chemically the same, but one contains some heavy, stable isotopes and the other one does not, then these two molecules are still chemically the same, but they have a different mass. The mass spectrometer can distinguish these two molecules because they do have different masses, but because they are chemically identical, the specific signal that the heavy form and the light form each produce is identical. So if the quantity of one molecule is known, let’s say the heavy one, and you use this as the reference, you can very accurately measure the quantity of the light one by simply comparing the signal intensities. That’s not a new insight. That approach has been used in quantitative mass spectrometry, for small molecules in particular, for a very long time.

What we intended to do was find a way to introduce stable isotope signatures into proteins that could then be used for quantitation. So we synthesized, in collaboration with a group led by Michael Gelb at the University of Washington, molecules that react with specific sites on a protein. The site the first reagent targeted happened to be the cysteine residue, which is a readily reactive amino acid. At each of these residues the reagent introduced either a heavy or a light isotopic signature, which could then be read out at the end by the mass spectrometer. In that way we can take two protein samples containing many tens or hundreds or thousands of proteins and label each with an isotopic signature — the heavy form for the proteins in one sample and the light form for the other. Then we combine the samples, digest them, and analyze the peptide fragments in the mass spectrometer. We sequence the peptides and at the same time measure the signal intensities for the pairs of heavy and light peptides of the same sequence, and that allows us to calculate the relative abundance. That’s basically what we call the ICAT technique — isotope-coded affinity tags. From each experiment we effectively obtain two answers — one is which proteins are present in the sample, and the second is how much of each protein is expressed.
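
The quantitative readout described here reduces to simple arithmetic: because the heavy- and light-tagged forms are chemically identical, equal amounts produce equal signals, so the light/heavy intensity ratio directly gives the abundance of the light peptide relative to the heavy reference. A minimal sketch of that bookkeeping follows; the peptide sequences and intensity values are hypothetical, not data from the interview.

from dataclasses import dataclass

@dataclass
class PeptidePair:
    sequence: str           # peptide identified by sequencing (which protein is present)
    light_intensity: float  # signal from the light-tagged sample
    heavy_intensity: float  # signal from the heavy-tagged reference sample

def abundance_ratio(pair: PeptidePair) -> float:
    """Light/heavy signal ratio. Equal amounts of the two tagged forms give
    equal signals, so this ratio reads out abundance relative to the reference."""
    return pair.light_intensity / pair.heavy_intensity

# Hypothetical heavy/light pairs observed for two peptides
pairs = [
    PeptidePair("LVNEVTEFAK", 2.4e6, 1.2e6),   # about 2x the reference
    PeptidePair("GDLGIEIPAEK", 0.9e6, 1.8e6),  # about 0.5x the reference
]
for p in pairs:
    print(f"{p.sequence}: {abundance_ratio(p):.2f}x reference")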

You’re moving from the Institute for Systems Biology back to Switzerland, your native country, in November. Why did you decide to make the move?

I’m originally from Switzerland, and a position was offered to me [that] is a great opportunity professionally and also a chance to go back to Switzerland. It’s a professor position at ETH, which is the Federal Technical University in Switzerland. The position is part of a cluster of faculty positions being advertised in our field to build a systems biology initiative in Zurich, an initiative that is quite sizable in its scope. I have the chance to be there at the beginning and to work towards building this initiative, which will be carried by both ETH, where I’m going, and the University of Zurich, itself a sizable university. It’s a very exciting project and really a nice opportunity, something I couldn’t decline.