Name: Ruedi Aebersold
Position: Professor of molecular systems biology at the Swiss Federal Institute of Technology (ETH) Zurich and the University of Zurich, 2004 to present; faculty member at the Institute for Systems Biology, Seattle, Wash., 2000 to present
Background: Co-founder, Institute for Systems Biology; professor and associate professor of molecular biotechnology, University of Washington, Seattle, 1993-2000; assistant professor of biochemistry, University of British Columbia, Vancouver, Canada, 1988-1993.
Postdoc, California Institute of Technology, 1984-1988.
PhD, cellular biology, University of Basel, Switzerland, 1983.
SIENA, Italy — Ruedi Aebersold spoke with ProteoMonitor nearly two years ago about his days as a postdoc and how he got into the proteomics field [PM 09-03-04]. This week, at the 7th Siena Conference on Proteomics, Aebersold spoke about the lack of progress in the field (see story, this issue). While some studies have produced interesting observations, he said, proteomics has not proven very useful in the study of systems biology. He went on to discuss the need for a PeptideAtlas to help validate data quality, and the need for targeted quantitative analysis of information-rich peptides.
ProteoMonitor caught up with Aebersold after his talk to discuss how he thinks research in the field is going, what he sees for the future, and his recent appointment to the scientific advisory board of Rosetta Biosoftware.
During your talk this morning, it sounded as if you were saying that a lot of little things have been accomplished in proteomics, but in the big, global picture, not much has happened.
The reason is that anytime anyone starts a program, they don’t know what’s happened before. [There’s] no prior information about the program; they just start measuring. As an analogy, if you want to go from New York to Siena, you’re not just going to wander off in some direction with the hope that eventually you’ll find Siena. You will use a map and maybe a timetable, and these timetables and especially maps are based on prior information that people have gathered before.
They describe how things work, where the connections are. And that greatly facilitates getting where you need to go. In proteomics, if you can factor in what has previously been discovered about the proteome in your current analysis, then you have a much easier time because you can build on what others may have done before. So this shotgun approach of just walking in and starting to measure is actually very successful on some levels, but it stalls at the level of maybe 1,000 to 2,000 proteins unless you make an extraordinary effort. And that means only a fraction of the proteome has been discovered. So the idea is to first map it out, then learn from all these prior experiments, and then devise some more strategic and clever assays or measurements.
It seems like there’s been a lot of work done. Is much of that work not useable?
There’s a lot of information out there. It’s not useable just as such, so you have to do a lot of processing of the data to get a coherent picture. Obviously, it does not help you if you buy a map that is essentially wrong, [if] it has [something] like 50 percent false positives in there. So you have to make sure everything is consistently analyzed. And then the information becomes extremely useful.
It’s not useful because the standards for research and methods are not there?
Yeah, there are different formats, different thresholds. Data is not shared enough, and maybe experiments are not well described. But there’s a lot of information out there, and if you put [some reporting standards] in place you’d make a lot of headway toward mapping out the proteins.
That’s what we’ve done [at ISB]. We’ve built an informatics structure which basically can connect to every laboratory running any kind of mass spectrometer and then the data is analyzed exactly through the same pipeline. So you’re comparing apples to apples and not apples to oranges.
Of course you could go through every journal paper that has been published, say, ‘Well, these are the proteins the authors identified,’ take them at face value, and just accumulate them all. But that wouldn’t work, because there are very different error rates associated with each study.
HUPO and some other groups are trying to create reporting standards. Would it make more sense to create the standards first and then create the PeptideAtlas [a publicly available compendium of peptides identified by tandem MS, available at http://www.peptideatlas.org/] that you were speaking about?
Creating the standards is a necessary prerequisite for making such a map because people need to be able to communicate with it and that leads to protocols and standards, but it’s only part of it. We [ISB] started making these standards or formats, or plans for formats, a while back. And then HUPO supported them [but then they started to create their own standards] and now they’re going to be merged [See PM 06-08-06 for further details on the integration between HUPO/PSI’s mzData and ISB’s mzXML].
The last meeting here in Siena was two years ago. What has changed since then? What’s moved forward or do you think the field is at the same stage?
It certainly has progressed. [One] of the important advances, I think, in the last two years is the idea that measurements should be inherently quantitative, [which] has been very, very widely accepted. So … we’re looking for differences, not just lists. Consequently, a number of methods have been developed that support quantitation. So that is not a new concept, but it certainly has strongly penetrated [the field].
Then there has been notable success in analyzing certain subproteomes. In some areas, [such as] analyzing protein complexes, [the] interaction approach, there has been a fair amount of progress. Analyzing mitochondria or chromosomes, very simple structures, there has been a fair amount of progress. There’s been a lot of progress in instrumentation; there are now much more precise and robust mass spectrometers than even two years ago.
In the direction that I talked about this morning, there hasn’t been too much movement; it’s been basically our group that has pushed [ideas] that have seen a lot of increased acceptance, principles such as first mapping the space out and then doing targeted analysis. That’s especially well received in biomarker research, where you try to weed out proteomic signatures from serum or plasma. There, if you go with the shotgun approach, let’s say discovery first, you don’t accomplish much. It’s so enormously complex that the analysis bogs down very fast. So going for some targeted measurements is much more promising.
Is that the widespread practice now, to do targeted analysis?
It’s not so widespread, but [that’s] because the methods are just coming out. Conceptually, I think it’s widely accepted, but technologically it’s just becoming feasible. So I think that if you sit here again in two years, that will be one direction proteomics will be going. I’m pretty certain.
What about five years ago, how would you compare where we are to where we were?
I think we’re still looking for the same goals. They have not been reached, but there has been steady progress. I mean, these are complicated problems. [One of the goals] was to compare the proteomes of various cells and of cells in various states. That is something that people probably would have supported five years ago and still support today, but it has not been reached. It’s been difficult to reach for the reasons we discussed.
I think a goal that has been around for a very long time — and there’s increased optimism, though there have been some hiccups, some very high expectations, and crashes — is biomarkers. The question is: How can you weed out misinformation? That’s an old question, and [there have been] various attempts to solve it, [but] technically nothing really works.
Now I think reality has sunk in and it’s a very important goal, a difficult one to achieve, and now there are much more systematic studies or attempts, so that has been a field of steady progress.
What is different for you now that you’re based in Switzerland?
The environment, of course, is different. One of the advantages in Europe is that you get steady core funding to the laboratory. It’s not grant money. It’s basically an account that you get from the school and that means … that you can do long-term or riskier projects.
What about collaboration between academia and companies? Is there a different atmosphere in Europe?
I think people are a little more relaxed in Europe about conflicts of interest and things like that. In the US it’s gotten very, very difficult, especially if you are at a public university, because everyone is afraid of being accused of a conflict of interest. [In Europe] you have to show you’re not misusing state funds to directly support the companies, but I think generally it’s more relaxed.
Tell me about your role at Rosetta Biosoftware. Why did you decide to accept their offer? I would think you get your fair share of proposals to be on advisory boards.
One key reason [I accepted Rosetta’s offer] is I’ve known Rosetta and the people in it for a long time. And they’re a very good group.
Secondly we have at the ISB and also in Zurich now, over the last five years, invested a lot of effort in developing tools, computer tools, to analyze proteomic data coming from mass spectrometry. So there’s a whole suite of tools that have been generated and they’re assembled into a whole pipeline which we call the Trans-Proteomic Pipeline.
So now we have a whole range of tools out there that people use, and we want to develop new ones because things move ahead. So now we have the problem of maintaining the old tools while developing new ones. Eventually, you run out of steam, because the more tools you have to maintain, the fewer resources you have to develop new things. It has been in our interest for a long time to see whether some of the private-sector companies in the informatics field would take these [open-source] tools, build them into their own [portfolios], and then support them while they remain available as open source from the ISB and maybe from others.
So Rosetta is interested in doing that. In fact, some of their developers who were at ISB before developed these tools. It seemed like a good idea to be involved with Rosetta and to update them about what’s available … so that they will develop commercial software [that] supports the workflows that are commonly used.
The prime criterion [in my choosing which company to work with] is always whether what the company is doing is a good match with what the lab is doing, because I hope that some things come back to our lab. As an academic lab, we are good at some things; other things we are not good at. For instance, we are not good at generating nice user interfaces or bringing tools together so they work seamlessly with each other. These are mainly software engineering tasks, which academic labs usually are not very good at. There are companies that are very good at that. So what we hope is that some of the tools that are developed will come back to our lab and then we can use them.