AT A GLANCE
Name: Ruedi Aebersold
Position: Co-Founder and Professor, Institute for Systems Biology, Seattle
Prior Experience: Developed the electroblotting technique for analyzing small amounts of peptides for chemical sequencing, and the ICAT reagent technology for high-throughput quantification of proteins
Ruedi Aebersold may not have been the first to dream of one day studying vast numbers of proteins simultaneously — he credits others, such as Julio Celis, Leigh Anderson, and Jim Garrels, with similar aspirations — but his stewardship of the ICAT reagent technology may represent one of the largest practical contributions to date toward making that dream a reality.
Aebersold himself, however, views this contribution in a more mundane light. “I view [the development of ICAT and other techniques] as a long-term sidetrack,” he said. “We’re really not in it for the sake of the methods; what we’d really like to do is measure what’s going on in the cell at the protein level.”
That viewpoint makes sense considering Aebersold’s background in basic biology. As a graduate student in his native Switzerland, in the Biozentrum at the University of Basel, he sequenced a panel of monoclonal antibodies and studied how they recognized a specific antigen, with the aim of gaining further insight into immune response.
With Hood, Speeding Up Edman Degradation
Aebersold’s graduate research in Basel put him in contact with Lee Hood, who ran a lab at Caltech during the 1980s developing technology for sequencing DNA and its corresponding proteins. At the time, protein chemists used Edman degradation to identify the constituent amino acids of a protein — a relatively slow chemical method that required ripping off amino acids one at a time — and Aebersold joined Hood’s lab at Caltech to find ways of speeding up the cumbersome process and making it more sensitive.
He met with some success in his first attempts at identifying proteins taken from 2D gel spots: electroblotting them onto an inert surface, digesting them with an enzyme, recovering and separating the peptides, and sequencing them by Edman degradation, work that culminated in a JBC paper published in 1986. But the advent of new ionization techniques for mass spectrometry, combined with genome databases, allowed Aebersold’s techniques to spread rapidly. “Somewhat by coincidence, the methods to generate peptide fragments from small amounts of 2D gel separated proteins were really compatible with the emerging types of instruments” that used ionization techniques such as MALDI and ESI, he said.
In 1988, Aebersold took a teaching position at the University of British Columbia in Vancouver, where he spent five years honing the electroblotting technique and other methods for protein analysis. However, what Aebersold really wanted was a way for not only identifying but also measuring the amount of proteins present in a sample. “We realized that if we wanted to make high-throughput techniques useful for proteomics, we needed to make them quantitative,” he said.
At UW, ICAT Has No Room to Expand
When Hood, Aebersold’s former advisor, left Caltech in 1991 to start the department of molecular biotechnology at the University of Washington, he recruited Aebersold to join the new department in 1993. There, Aebersold got the idea of adapting for use with proteins a traditional mass spectrometry technique in which stable isotope labels are attached to a molecule to quantify how much of it is present in a sample. Together with Mike Gelb, a chemist at UW, Aebersold devised a method for measuring the concentration of peptides in a sample relative to a standard, and then using computer algorithms to piece together which proteins the peptides represented. The stable isotope tags were introduced into the proteins via a reagent, which they named ICAT, for isotope-coded affinity tag. The reagent bound to every cysteine residue present in a mixture of peptides.
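The principle behind ICAT quantification can be illustrated with a minimal sketch: a cysteine-containing peptide labeled with the light reagent in one sample and the heavy, stable-isotope reagent in the other appears in the mass spectrum as a pair of signals, and the ratio of their intensities reflects the relative abundance of the parent protein. The peptide sequences and intensity values below are hypothetical placeholders, not data from Aebersold’s work.

```python
# Sketch of ICAT-style relative quantification.  Each cysteine-containing
# peptide carries a light label in sample A and a heavy (stable-isotope)
# label in sample B; the light/heavy signal ratio gives the relative
# abundance of the parent protein in the two samples.
# All sequences and intensities here are illustrative placeholders.

peptide_intensities = {
    # peptide: (light-label intensity, heavy-label intensity)
    "LVNECK": (15000.0, 30000.0),
    "AGCFSR": (8000.0, 8200.0),
}

def relative_abundance(light: float, heavy: float) -> float:
    """Light/heavy signal ratio for one isotope-labeled peptide pair."""
    return light / heavy

for peptide, (light, heavy) in peptide_intensities.items():
    print(f"{peptide}: light/heavy ratio = {relative_abundance(light, heavy):.2f}")
```

A ratio near 1.0 (as for the second placeholder peptide) would indicate no change between the two samples, while a ratio of 0.5 would indicate the protein is half as abundant in the light-labeled sample.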
Aebersold credits the multidisciplinary environment of Hood’s department with providing the means to assemble the pieces of his ICAT reagent technology. “You cannot do all of these things in your group, so [working in a department such as Hood’s at UW] makes it very efficient,” he said. Unfortunately for UW, the university couldn’t provide resources fast enough to keep Aebersold and some of his colleagues in the department. UW had a three- to five-year time frame for expanding the department, Aebersold said, and he wanted to immediately begin exploiting the ICAT reagent technique on a grand scale in cell biology. “We were going to be left out of the interesting part,” he said.
Aebersold, Hood, and others initially began discussions with investors about starting their own company, but soon realized that the venture capitalists wanted to focus only on one technology with near-term economic value. Enamored with the idea of working with a variety of experts and bringing more than one technique to bear on a problem, Aebersold, Hood, and Alan Aderem, another former UW researcher, decided to go out on their own, “at the risk of falling flat on our noses,” Aebersold said.
Moving to Systems, New Software Tools
Thus, the Institute for Systems Biology, Aebersold’s current home, was born. Since 2000, Aebersold’s group has worked on combining his proteomics research with other high-throughput techniques for studying cell biology, such as cDNA microarrays and computational biology, in an effort to eventually develop models for cellular behavior. The institute has yet to build its own endowment, but has collected enough grants and research contracts to support 170 employees. “By and large, it’s been great,” he said.
In proteomics, Aebersold has continued to refine the ICAT reagent technique, most recently trying to build expertise in chemistry to find ways of modifying the reagents to make them specific to post-translationally modified proteins and other functional classes. In addition, the institute is working to recruit mathematicians, physicists, and other analytical scientists to develop bioinformatics software for finding patterns that make identifying proteins faster, and algorithms for quantifying the accuracy of a particular machine-produced result.
“We have to build entirely new software tools which assign probabilities to each database search or each computer-generated result so we know if that result is correct with 99 percent probability or if it’s a low quality result,” he said. “This is a crucial bottleneck in this whole proteomics field.”
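One simple way to attach the kind of confidence number Aebersold describes to a database search result — a sketch only, and not a description of the institute’s actual software — is to search the spectra against a scrambled “decoy” database alongside the real one and use decoy hits as an error model. The function name, scores, and threshold below are all hypothetical.

```python
# Illustrative sketch (not ISB's software): estimate the probability that
# search results above a score threshold are correct, using hits against a
# scrambled "decoy" database as an error model.  The false discovery rate
# (FDR) is the fraction of decoy hits among accepted hits, so 1 - FDR
# approximates the probability that an accepted result is correct.

def confidence_at_threshold(target_scores, decoy_scores, threshold):
    """Approximate probability that a result scoring >= threshold is correct."""
    targets = sum(1 for s in target_scores if s >= threshold)
    decoys = sum(1 for s in decoy_scores if s >= threshold)
    if targets == 0:
        return 0.0
    return max(0.0, 1.0 - decoys / targets)

# Hypothetical search scores against the real and decoy databases:
target = [4.1, 3.8, 3.5, 2.9, 2.2, 1.1, 0.9]
decoy = [1.3, 1.0, 0.8, 0.7, 0.5, 0.4, 0.3]

print(f"{confidence_at_threshold(target, decoy, 1.0):.2f}")  # → 0.67
```

Raising the threshold trades sensitivity for confidence: at a threshold of 2.0, no decoy hits survive and the estimated confidence rises to 1.0, at the cost of rejecting two real-database hits.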
Aebersold is also a fan of using quantitative proteomics as a means of understanding how biological networks interact, rather than simply cataloging the different types of proteins present in a particular cell state. “Biologists are mostly interested in change,” he said, “and that again leads to the issue that we need to be able to make measurements in a quantitative fashion.” A catalog, he added, “is not sufficient to explain biology or disease.”
“The way we think here [at ISB] is that proteomic and genomic tools are increasingly not tools to establish a reference map or a catalog, but that they are research tools to answer physiological or mechanistic questions,” he said. “In that sense they need to interface with a wide variety of biological research projects and that’s why we believe we need these high-throughput facilities.”