Name: Matthias Mann
Position: Professor, center director, department of proteomics and signal transduction, Max Planck Institute for Biochemistry, 2005 to present
Background: Professor of bioinformatics, department of biochemistry and molecular biology, University of Southern Denmark, director of the Center for Experimental BioInformatics, 1998 to 2007; group leader of the protein and peptide group, European Molecular Biology Laboratory, 1992 to 1998
Matthias Mann, professor and center director of the Max Planck Institute for Biochemistry’s department of proteomics and signal transduction, addresses the 10-year, $1 billion Human Proteome Project of the Human Proteome Organization and his own work at Max Planck.
This installment is the second in a two-part interview with ProteoMonitor. Last week Mann talked about proteomics technology and clinical applications for proteomics.
Below is an edited transcript of the conversation.
It seems like a lot of the leaders in this field are really concentrating on the technology side. Is that a coincidence, or is that by design because you see that there are shortcomings in the technology that’s out there?
That’s an interesting question, and the answer is that sociologically it is quite different from other fields that also have a big technology component. The reason is that a lot of it came from mass spectrometry, and mass spectrometry is its own scientific field.
For people like me who grew up in mass spectrometry, working on the technology is a legitimate occupation, whereas in biology, working on the technology of something like microarrays, or for that matter DNA sequencing, is not a legitimate occupation.
You cannot be a professor of microarray detection, but you can be a professor of mass spectrometry. And that means people will develop it.
In microarrays, for example, they will develop it so that it works, and companies will develop it because they want to make money, but there’s not a community of researchers who are also developing it. If you take the Broad Institute, they use it a lot and they will work on the software side, but they don’t make new and better chips every year, so that’s completely different from our field where we work hands-on very deep in the technology.
Also … there are few professors of HPLC, but the mass spectrometer has a long history of research and it attracts a lot of good people. That is unfortunately not the case in other instrumentation fields in biology.
Imaging is a bit of an exception: we now have the big breakthrough with very high-resolution imaging technologies, and that is driven by physicists.
Usually there’s too little invested in the basic technology development, in my view.
That’s interesting. I’ve wondered why protein microarrays have not been able to really break through and be used more in proteomics.
Well, it’s always difficult to distinguish there because … if you work in a certain area, that area has a certain potential that you can then develop or not. But if there isn’t that potential, then you also can’t develop it regardless of how interested [in that field] or talented you are.
I think the protein arrays have a number of fundamental limitations, the main one being these antibodies have cross reactions. There’s no way around it. And that’s why we’ve been really lucky, people like me who work in mass-spec based technology. This technology, basically, has no limitations. If you look at the basic physical limitations … we still have so much room for improvement over many, many years.
And that’s been the case for the last 20 years, whereas another technology, there could be some theoretical limits and that’s it.
Mike Snyder at Yale probably wouldn’t agree with you.
No, I’m sure he would not agree, but he also would know that I would say that. But he is one of the few who is doing good work in protein arrays.
What’s your view of the Human Proteome Project? The original 10-year, $1 billion price tag, the gene-centric approach — do you think it was approached correctly, or do you think those trying to sell it, especially to funders, may have overstepped?
The answer is yes and no. Our field is really dependent on technology and the more you develop the technology, the more things you can do. That’s why it’s different from the Genome Project.
The Genome Project was like, OK we have to develop the technology to the degree that we can do what we want to do and it was very clear what this would have to do, it would have to sequence all the bases with a certain accuracy and certain speed and then we can do the Genome Project, and then we’re finished.
That’s completely different from proteomics. It can do many more different things, and part of the technology is common to all the different things that you can do. For example, making faster, more sensitive, and more accurate mass spectrometers helps in all the different fields that you can apply mass spec-based proteomics to.
So investing in that technology development was a very good thing, because even without doing a big proteome project, it already pays back for itself because it is immediately used in all kinds of biological projects where you find phosphorylation sites that help you in fundamental biology, so I think all that investment in better proteomic tools has paid [for] itself many times over.
And [HUPO's] plasma proteome [project] with 30 different groups, I think it wasn’t such a big hit because … the goal isn’t even very clear and it’s not the same kind of data-producing effort. I mean, there hasn’t been any definitive catalog of plasma proteins, and even if there were, by itself it wouldn’t help you very much.
So there are different areas, one is the interaction proteome, meaning what all the proteins interact with. That’s definitely very useful, and that’s becoming technologically possible now. That’s one area where it makes sense to focus on, and say, ‘OK, let’s do it.’
And there are some efforts to do that, and I think that’s a very good one.
Then another one … [which] is happening kind of by itself is the phosphoproteome. A lot of signaling is mediated through phosphorylation and this field will be revolutionized by proteomics techniques, whether they want to or not, because we can now find and quantify all those sites.
The community actually can’t make sufficient use of it as we discussed before, but the data is there and that field will work on a very different basis from now on.
And the same is true of all kinds of other modifications such as ubiquitination, so that field is being turned upside down by proteomics and it would be even more the case for some of the modifications that are just as important but [which] can’t be studied now.
So that will be able to be studied with proteomics, and in some cases, [proteomics] will create large biological fields where they just didn’t have the tools, and for that reason there wasn’t a field, and now there will be a field.
There again, proteomics is more than paying for itself and you can imagine all these different modifications in different circumstances, so there will be some data-generation arm of this … and creating the tools for people to use them in very specific circumstances.
And then expression proteomics … is actually getting very competitive now. Once the accessibility issue is solved, it will be very competitive with microarrays. It will never have the same throughput probably, but on the other hand, the data is much more useable, I would say.
That can be used basically in all these situations where people were using microarrays before, and we’re also getting similar sensitivity. For better or worse, just when we’re achieving that, they are now moving to deep sequencing, so they’re kind of moving the goal post.
But that’s also good because now we can look at post-transcriptional regulations … and people are seeing more and more that things on the mRNA level or the RNA level in general are really, really complex. Nobody can make heads or tails of what’s happening. It’s much too complex, actually, for us to interpret at this point.
And that will make this expression proteomics very, very useful because then you can actually see what … whether it be on the mRNA level, microRNAs, what is actually the final effect on the proteome level. To have the technology applied on large-scale projects and on an individual basis will be another huge impact that proteomics will have.
As someone who works in the field, you can see where the technology can go and how it can be applied. But maybe for those who don’t work as intimately in proteomics, such as funders, they may not be able to see it in the same way. Was part of the problem the way it was being presented and sold?
You can’t just from the start pick the right groups and the right technology and say, ‘Go with that.’
It was the same with the Genome Project. They also had a phase where they funded very broadly and later there were things they wished they hadn’t funded. There was probably even more of that in [HPP], but some of that was also unavoidable.
I can get up on my soapbox and say, ‘You have to use these methods,’ but how can other people say that ‘He’s right,’ and then the next person is not right?
The field is converging on some technologies and has abandoned others, which a lot of us predicted, but others were also mistaken.
Again, if you see what’s coming out and you see how many people now use these technologies, not to make a human proteome, but to solve their specific problems, it’s more than paid [for] itself, and we will be able to attack these large projects as well.
Where do you stand on the endpoint issue? Some people are pushing for endpoints as a way to get funding, but others are saying, ‘Let’s concentrate on pure research,’ and it needs to be open-ended.
I think people are caught in a little bind there. Many people realize that in proteomics, it’s very multi-dimensional, [there’s] protein interactions, modifications on the proteins, all kinds of things, so the area of what we’re doing, groups like ours, is a lot in the tools, rather than in the data we produce.
And I think many people realize that, but then the problem is if you want to generate momentum for a big splash from NIH, like they did for the Genome Project, then that’s not going to happen if you say, ‘Yes, we want to improve the technology.’
Then [NIH] says, ‘Fine, but we won’t give you $100 million for that.’
And that’s the problem. And even when you have done that, it’s also not such a big deal as if you had said, ‘I want to do the human proteome and now I have done it.’
And so I think that’s a lot of the issue, that people can’t get excited about these incremental advances, that the journey is the reward instead of saying we want to get here, and that’s what we promise to do, and then we do it, and then it will be a big deal.
There are some exceptions, like this interactome, but even that, you can do not only once but many times in many different situations.
But where do you stand on that?
I think it’s going to be an ongoing, open-ended process. Some people will be very creative, and [funders] will be very creative once they get more familiar with the techniques and can really see what they can do, they will be very creative with how they apply them.
Even people like us cannot foresee how this will be best or exactly applied for stem cell questions. We have some ideas, but we are not so into those fields or different fields that we can say, ‘This community should use this for this.’
And that’s the nice thing about proteomics, it’s completely generic. You can use it for microbes, for plants, for stem cells. We can follow how stem cells get reprogrammed. You name any biological area, I can tell you how proteomics can probably be used in that area.
During the HUPO conference in Amsterdam, HUPO also said that it doesn't want governance over HPP.
Obviously, anybody is nervous about some central organization where the committee is saying what they should do or not do. That's one thing. The other thing is HUPO's function is more of a standards thing … and I was talking to Matthias Uhlen about this, and we think that HUPO should do a few things and it should do them well.
But it shouldn't and it can't really organize those projects because the money is coming from national sources, or the EU, for example, and HUPO cannot then say that US money will be spent like that, because that's what NIH decides by itself, and you would not want HUPO to say otherwise.
You can have this coordination role and vision role and help to fundraise in different countries.
Who would take the lead roles, then? The NIHs and the EUs?
For example, NIH and the Wellcome Trust came together very successfully to beat Craig Venter, or at least match him [on the Genome Project]. They can do these things, and they are very aware that they shouldn't duplicate what the EU is doing and so on.
And they also call people like me to their meetings, and I can say what we should do with proteomics, what's the best way forward. I can give them my opinion, so they are aware of these things and they will turn to HUPO for advice, as well.
But in the end, they are responsible [for where the money goes] and they literally can't [leave] those decisions to HUPO. And they never will do that. And the same [is true] for Wellcome, and the EU.
What are you working on these days? What is piquing your interest?
It's such a good time now with the technology. It can do so many things, so the problem is not to find something interesting but that we have way too many interesting things.
I have a large group now, about 50 people, and they're working on anything you can imagine that you can do with proteomics.
But the things that I'm very excited about are … the interaction proteome because now together with Tony Hyman [at the Max Planck Institute of Molecular Cell Biology and Genetics] we have a very good technology where we can actually use some of the same approaches that people could only do in yeast cells before because they are genetically amenable and human cells are not.
But now we can actually do those same things in human cells.
And then we're trying to use this ability to get a deep quantitative proteome in a lot of different projects. One that I'm quite excited about is [one] where we are looking at mitochondria of brown fat cells versus white fat cells, and we quantify them in vivo against each other in mice. That's giving us very interesting functional insights into how they work. The goal there is of course to make the white fat a little more like the brown fat, which should help you to burn more calories, but it also has a lot of implications for aging.
And we do have some stem cell projects where we are trying to, like everybody else probably, look at what makes these inducible pluripotent stem cells tick and how we can make this process more efficient.
And we have several projects in large-scale modification mapping. For example, we are now very interested in acetylation and we think that's going to be as big a modification as some of the ones people study all the time.
Why acetylation versus ubiquitination or glycosylation, or some other PTM?
We are also working on ubiquitination, and other people are already working on that too. That's clear … it's already very important.
Acetylation, people haven't caught onto this yet, so they're studying it only for histones by and large, but it is much more widespread and we are following that. And with SILAC, we can do quantitative [experiments with] beads, so we can see exactly which sites are regulated with HDAC inhibitors, for example, that are now in the clinic.
So at least mechanistically we can have a clinical impact with low-dose drugs, and [see] what an HDAC inhibitor can do. People are looking only at the histones, but they have much more widespread effects.
It's the same with kinase inhibitors: with phosphoproteomics we can look at exactly where they are exerting their effects, aside from the pathways that you are targeting. We are doing that more as a matter of principle.
Any data you can share for the work you're doing with acetylation?
We can recapitulate the changes in the histones that people have studied so far, but then we see it in all sorts of processes. Previously it was studied on histones and on p53, which is a molecule that has been studied in such depth that people have eventually found everything there.
We also found those, and then on tubulin, so those were the three categories, and we find many more effects in those three categories than previously known. Or in the case of histones, at least, we find that the HDAC inhibitors don't work as broadly as people thought they did. They prefer certain sites over others.
And then we see that it's involved in a lot of processes where there's been almost no indication whatsoever, so it's heavily involved in splicing, for example. And to our knowledge of the literature, there's only a single report …that reports a single acetylation in the whole splicing apparatus.
And now we have hundreds. And we can see how they're regulated.
Have you seen any effects from the financial downturn on the research community, and are you concerned that there will be?
I'm in a lucky situation because at Max Planck we have long-term funding and we don't think that the government would choose to, especially in the downturn, to cut funding. I'm not worried about our research, but I have a large group and large turnover, and those people need to get jobs.
That's what we're worried about, and we have seen the first cases where companies have said, 'We have a hiring [freeze], we need to see how the situation develops.'
Clearly for young people trying to establish their own labs, it's not going to make things easier, and we hope of course, that this stimulus [proposal] that we also have in Europe, that part of that will go to science. But we don't know.