Max Planck's Matthias Mann Discusses Technology Development and Clinical Proteomics


Name: Matthias Mann
Position: Professor, center director, department of proteomics and signal transduction, Max Planck Institute for Biochemistry, 2005 to present
Background: Professor of bioinformatics, department of biochemistry and molecular biology, University of Southern Denmark, director of the Center for Experimental BioInformatics, 1998 to 2007; group leader of the protein and peptide group, European Molecular Biology Laboratory, 1992 to 1998

During the ABRF conference this week in Memphis, Tenn., Matthias Mann gave the plenary speech on proteomics and described work he is doing that combines mass-spec technology with computational proteomics techniques to characterize the yeast proteome.

Mann, who is center director of the Max Planck Institute for Biochemistry's department of proteomics and signal transduction, is considered a pioneer of proteomics research. Among the proteomics methods and technologies he has helped develop are peptide sequence tagging, nano-electrospray, and most recently stable isotope labeling with amino acids in cell culture (SILAC) for quantitative proteomics.

ProteoMonitor spoke with Mann at ABRF about the state of proteomics technology and research, his own work, and where he sees the field heading. Below is an edited version of the conversation, the first in a two-part installment. Part 2 will appear in next week's issue.

Why are you here? What do you hope to get from a meeting like ABRF?

I do hope to get the latest trends in instrumentation. We're, for example, interested in deep sequencing. I'm not necessarily interested in buying an instrument, but we do interact with those people and this is a place to catch up with the latest developments.

That's part of my job, to distribute what we've done by way of talks in addition to publications … and to interact with [my] colleagues. I got a really good idea from Mathias Uhlen [of the Human Protein Atlas] yesterday, so that was already worth the whole trip.

Is that a collaboration that you'll be doing with him?

We're already collaborating in an EU grant that I'm coordinating [called PROSPECTS] so we're already in the same network grant and we have direct collaborations.

Can you share what this idea is that he gave you?

No.

He's building up this huge resource with the antibodies … so if you want to know where the proteins are expressed, you can see that by [antibody-based technology], or you can see that by quantitative proteomics. And in principle, they should give you the same [results], but image analysis is not always so quantitative, and antibodies are not always so quantitative, so it would be good to calibrate them against mass spec data, and that is what we're doing.

For a defined system, we're going to take the same cells and look at how the images look for the same proteins, and then at what the SILAC ratios are. In our case, you determine the SILAC ratios and see whether they correlate with what you see in the picture.

In some ways, we're trying to do similar things but with completely different technologies and they should agree, or they should be reasonable. But when they don't agree … that's interesting by way of quality control.

But of course, they give you different data. We get the precise ratios. From the imaging [methods], you can see, for example, where in the cell the proteins [are]. We can also see that, but you need to do additional fractionation.

So they're completely orthogonal and together they should give you much more than either one alone.
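
As a rough illustration of the cross-calibration Mann describes (comparing SILAC ratios against antibody-based image quantification for the same proteins), a minimal sketch might look like the following; all protein identifiers and values here are hypothetical, not taken from the interview.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: for each protein measured both ways, the SILAC ratio from
# mass spec and the intensity ratio estimated from antibody-based images
# (both expressed as treated / control).
silac_ratio = {"P1": 2.1, "P2": 0.5, "P3": 1.0, "P4": 3.2}
image_ratio = {"P1": 1.8, "P2": 0.7, "P3": 1.1, "P4": 2.5}

shared = sorted(set(silac_ratio) & set(image_ratio))
x = np.log2([silac_ratio[p] for p in shared])  # log2 space makes up- and down-regulation symmetric
y = np.log2([image_ratio[p] for p in shared])

r, p_value = pearsonr(x, y)
print(f"log2-ratio correlation across {len(shared)} proteins: r = {r:.2f} (p = {p_value:.2g})")

# Proteins where the two technologies disagree by more than two-fold are worth
# a closer look, which is the kind of quality control Mann mentions.
for prot, lx, ly in zip(shared, x, y):
    if abs(lx - ly) > 1.0:
        print(f"{prot}: SILAC and imaging disagree (log2 {lx:.2f} vs {ly:.2f})")
```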

As you look at the landscape, how would you evaluate where proteomics is in terms of technology development?

It's good to be in this field because there's always excitement. You keep thinking it's going to mature, but it keeps on going and I don't see it slowing down. What's happening is [that] for certain applications, the technology is good enough.


A number of years ago, you could identify [gel] bands quite routinely, and core facilities can do that now. It took a long time, but that's a done deal.

But large-scale quantitation is not a done deal. It will still take a number of years, but it's going to come. And so it is for many things: phosphorylation is still in a fluid state. It will take a while as it migrates from research labs to more application-oriented labs.

But the technologies are in a very good state, I would say. For a number of years now, we have had very high-resolution instruments that are also very robust. That really made a very big difference, because previously you could have a very high-resolution instrument (Fourier transform instruments have been around for decades, actually), but they were really research instruments. I mean physics research or chemistry research instruments; you couldn't use them for biological work.

And now we have both very robust systems and very high resolution at the same time.

Is the bottleneck then on the technology side, or in the interpretation of the data?

It was in the software, and it still is in many ways, but at least for our lab now, we have solved that with this MaxQuant software. … Two years ago even, we would acquire a dataset for a week, let's say, and then we'd sit there for three months or six months [or] a year to really determine the ratios, manually validate them and so on. And now it's completely automatic.

Even before that, six or seven years ago, [we had to] take the mass spectra manually, and that's, of course, completely automatic [now]. In the same way, the analysis of the mass spec data is now completely automatic.

What has that meant in terms of the workflow?

We have [so] many instruments that if we didn't have the automation of the software, we couldn't use them. One would be too many, because we would just have a big bottleneck: every experiment would take three months to work through.

So that's absolutely essential, and it's now in a state where it's actually better than [what we could] ever do by hand.

So you're getting more data more quickly.

And more accurately.

Has that shown up in results that have some kind of efficacy, not just data that's kind of interesting to look at?

Yes. We are now moving downstream in the data analysis, so we have to do a lot of similar things to what the microarray people have been doing for many years. But the data is different; it's a lot more quantitative.

So in microarrays, if you have a certain fold change, oftentimes it doesn't mean very much. It probably just means that it's up, but the actual fold change doesn't mean so much. But what we found is if you look at a certain complex, then all the members will have the exact same fold change, so that means if you didn't even know, or if you just suspected that there were complexes, then now you have a very strong hint. That is one example of how I think the data will be much richer than what one is used to so far.
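
The point about complex members sharing the same fold change can be checked directly once accurate ratios are in hand. Here is a purely illustrative sketch of that consistency check; the subunit names and values are invented for this example.

```python
import statistics

# Hypothetical SILAC fold changes (treated / control) for proteins annotated as
# members of the same putative complex.
complex_members = {"SubunitA": 1.92, "SubunitB": 2.05, "SubunitC": 1.98, "SubunitD": 2.01}

ratios = list(complex_members.values())
mean_fc = statistics.mean(ratios)
cv = statistics.stdev(ratios) / mean_fc  # coefficient of variation of the fold changes

print(f"mean fold change {mean_fc:.2f}, CV {cv:.1%}")

# A tight spread around a common fold change is the "very strong hint" of a
# complex described above; a loose spread argues against co-regulation.
if cv < 0.10:
    print("members change together, consistent with one complex")
else:
    print("members diverge, probably not behaving as one complex")
```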

Actually in biology, there is very little data that is very accurate. It's basically only DNA sequencing that's very accurate, and all these other methods based on biomolecular recognition, or based on antibodies, or based on hybridization, they do have certain limitations.

But mass specs are very digital … you can measure the mass extremely precisely now, so in principle, both the identification and quantification can be extremely accurate and I think will be.

And because you're measuring proteins … you will have ratios very accurately and the ratios mean something. That's something that people haven't caught onto yet, but it will be very, very useful. We've seen some examples of that now, but once people do it more routinely on a large scale, it will be a whole different dimension.


What has this ability to create all this data meant in terms of demands on the researcher? Have they had to come in with a whole new skill set or knowledge set now that maybe even just a few years ago wasn't necessary?

That is true. We don't collaborate so much but sometimes we do collaborate, and in one collaboration with a famous cancer researcher … we took his cancer lines with these point mutations in them and quantified … 5,000 phosphorylation sites, and then some of them changed.

And this person just couldn't deal with this: 'What should we do with this?'

Even though it was quantitative … he [proceeded] then to make three phospho-specific antibodies. … And we completely wasted this whole database. It just meant the collaboration was not very fruitful because the person was not using even 0.1 percent of the data.

So they definitely need a new mindset: 'How can we look at the whole picture, not just the phosphorylation site that we want to know the function of, or the site of interesting proteins?'

They need to have systems-wide thinking, and they don't have that.

Has that forced you to change the way you approach collaborations?

Yes. Again, we don't do that many collaborations, but in the future we will insist that they take a systems-wide view. Otherwise, we give them these thousands of sites and we only confuse them, and then it's not a good thing to do.

Has all this technology development created a gap in terms of who can actually use all the information?

Absolutely. It's mainly the accessibility. That is actually one of the problems with this whole proteomics field: that we can publish some papers and give some talks, and people get excited. The next [thing] is they want to do it, but they can't because it's not accessible.

So even if their core facility could do it, they still can't practically do it because it would block the machine for several weeks.

We can get a pretty deep proteome in two days, but that took us three or four weeks before [to get that] same depth. So it is improving, and we have this goal of getting a complete proteome in one day, and then that will make it more accessible.

A complete proteome of what organism?

Of anything. Of a yeast cell or a human tissue, what have you. We still wouldn't get a complete proteome of plasma, because that's very, very difficult, but of a cellular system, let's say, or anything that people would otherwise measure by microarrays or deep sequencing. That is something we should be able to do in a short time.

This accessibility issue that you mention, does that affect the reproducibility of experiments?

Well, in the first instance, it just means that people can't use this technology because they don't have a place to do it. Then it's connected to reproducibility in the sense that maybe people can't do as many replicates as they want to, or as they should.

Maybe it already takes two weeks, and if they did triplicates it would take six weeks, and that would be completely out of the question. That was the case for other technologies too when they were first starting, so I think that that will improve.


Funders say they want proteomics experiments to show some clinical utility. People in proteomics say that they have made some clinical in-roads, but funders don't seem to understand how hard this stuff is. How do you address that disconnect? People who are paying researchers like you to do the work are saying one thing, and you're saying something else.

Well, we should say that both parties are guilty. There were certainly a lot of proteomics people who [made] these clinical promises to create hype and to get grants. I should say that we never did that because I thought that the technology was a long way from being useful for things like plasma and so on.

But certainly a lot of people in the community did hype that even in 2001 when the technology definitely wasn't there.

That said, we are getting interested in clinical proteomics again now. We typically do basic biology with our technology … I've always been interested in it because it would be really, really useful if it worked. It's just that it didn't work.

But now I think with advancing technology there is a chance that it might work. I think it will take a couple of years, but actually I'm quite optimistic now that they can be very useful.

Have you advanced enough in basic biology and basic research with proteomics that you can seriously tackle clinical applications, or do you think you're still a step away from that?

I think we're still a step away. The dynamic range of a cellular system is much less than that of plasma, so it was kind of unfortunate for the field that people tried to attack the most difficult and technologically most challenging question there is in proteomics with actually the worst technology you can imagine.

At the time, I mean the SELDI and the 2D gels, they were really no match for this problem. That created a lot of backlash against proteomics.

What kind of clinical questions would it be appropriate now for proteomics to tackle?

You don't always have to look in plasma. The sensitivity is improving tremendously and … for example, we would like to look at needle biopsies, and then perhaps, based on the proteomic pattern, one can classify the patients, similar to what people have been doing with microarrays.

And we can look at the profile of modifications in disease, which you can't do with any other technology, or not very well anyway. For example, we have already published a first proof-of-principle looking at a solid tumor in a mouse model, where we identified 4,000 sites, so that's a little bit behind basic cell line biology, but it's already respectable.

If that pans out, you could take, for example, a biopsy from a tumor and see what signaling pathways are turned [on], and then after therapy you could see whether you've kind of shifted the signaling pathway to a more normal state … so that could help at least in learning more about the disease and perhaps also with diagnosis, but that's a couple of years away.

What about biomarker research?

Biomarkers can be a lot of different things, and a biomarker in blood for cancer is a huge challenge. If we could do it, it would be great.

But perhaps out of tissue it's not so far-fetched, and the same goes for some other diseases, such as diabetes, where we're not looking for really low-abundance things. There, perhaps, we're looking for an inflammatory state, and those proteins are usually within reach for proteomics now, even in other body fluids.