Every week, ProteoMonitor interviews leading researchers to learn what they are doing in proteomics and their views on where the field is going. Following are some edited highlights from the Proteomics Pioneers of 2003:
On Biomarkers
William Hancock, professor of chemistry, Barnett Institute, Northeastern University
(Feb. 17, 2003) – We all want to find biomarkers, but we need to understand the basic biology before we can race off and say ‘yes, we found the biomarker.’ That’s a big job, but fortunately, we have a pretty large community, all working hard on that. I think the biggest stumbling block at this stage is understanding individual patient variability, and how that will affect finding good biomarkers that will be useful in general screening.
Joshua LaBaer, director, Institute of Proteomics, Harvard Medical School
(March 24, 2003) – There is a huge raging search right now for biomarkers, and I think eventually, that search will yield fruit. But I also think that it’s farther off than some people may think [due to] the number of validation studies that need to take place to confirm that these markers are real.
Richard Johnson, research scientist, Amgen
(April 4, 2003) – A lot of pharmaceutical companies have switched their proteomics efforts to biomarker discovery. Amgen is [also] thinking that might be an interesting thing to start looking into. Medically, if you did find biomarkers, it would be very interesting. If you have an interest in biology, I would say biomarkers are not particularly interesting because, what does it mean if you see a piece of haptoglobin clipped in some disease state? The likelihood of ever figuring out the chain of events that led to that is pretty slim.
Richard Caprioli, professor of biochemistry, Vanderbilt University School of Medicine
(Oct. 10, 2003) – Almost everything that happens to you winds up in your plasma – proteins break down, metabolites, toxins. Your plasma has a huge number of compounds from all different kinds of causes. So we thought it probably wasn’t the best approach to just go fishing in plasma and hope that you find something. Why don’t we go into the tumors, identify the biomarkers which are important in tumor growth or homeostasis. … Then, because of things like apoptosis, the cell’s going to break down and [the protein] is going to wind up in the blood. So why don’t we now go smart-fishing. It makes sense — we know they’re in the tumor, they’re going to have to show up in the blood at some level. The question is, will the level be high enough for us to detect?
On Top-Down vs. Bottom-Up
Gary Siuzdak, director, Center for Mass Spectrometry, Scripps Research Institute
(July 4, 2003) – Top-down, if you have a pure protein or a mixture that isn’t too complex, can be very powerful, especially if you combine that with things like electron capture dissociation. But the reality is that the best way to separate [complex protein mixtures] is to first digest them and separate out the individual peptides.
John Yates III, professor of cell biology, Scripps Research Institute
(Sept. 12, 2003) – Top-down’s got some attractive features in that you’re potentially looking at the functional protein. The problems associated with it are that it’s not very straightforward to separate and enrich for every single protein of the cell. And even if you were able to get that far, the methods that people use to try to fragment these things are not particularly general. You can’t just take any old protein, stick it in and expect to get reliable information. [Also], this is mostly done on FT-MS and the problem with FT-MS is that it hasn’t been very easy to do those experiments. I think that’s improving, but it’s not like you could fashion a large-scale and high-throughput method centered on top-down at this point in time.
The tandem mass spectrometry process on peptides has been around for at least 20 years. So a lot of the mechanics and automation have been worked out, and it’s a generally reliable way to generate sequence information for peptides. The drawbacks are the complexity that you produce when you do the experiments — so when you digest the set of proteins you increase the number of peptides by a factor of at least 20 if not 50. And the informatics are pretty intensive to analyze all that data.
Brad Gibson, professor and director of chemistry, Buck Institute for Age Research
(Sept. 19, 2003) – We’ve also tried to do some of the MudPIT analysis that John Yates has championed. I think that may not work for overly complex mixtures. I think it works really well if you have 50 to 100 proteins, but if you’ve got hundreds of proteins then you just keep revisiting the most abundant proteins, and it doesn’t have a lot of depth …
I think in modern proteomics, the shift is going to be toward separating things at the protein level and thinking about protein complexes, protein interactions, all that kind of stuff. I can see the shift among colleagues that you’ve got to go after the proteins.
On Protein Arrays
Daniel Knapp, professor of pharmacology, Medical University of South Carolina
(Sept. 26, 2003) – Ultimately, we’re going to reach the point where we know what proteins we’re interested in and how they change in a biological system, and there will probably be more time-efficient, cost-effective ways to do that in a routine manner, to actually apply proteomics to biological studies. It would be something other than mass spec, and a lot of people think the answer to that is microarrays, but then the question is, what do you put on a microarray to specifically capture proteins? The answer is not clear.
Tom Kodadek, professor of internal medicine and molecular biology, University of Texas, Southwestern Medical Center
(Oct. 31, 2003) – My colleagues and I have this vision that if we could develop these chips in a form that’s really cheap, we could put one of these units into at least every doctor’s office and maybe even your house, next to your toaster oven. The idea would be to sample your proteome on a daily basis, and use that as a diagnostic tool and be able to, in a pre-clinical fashion, catch when people are getting sick in various ways.
On Post-Translational Modifications
Daniel Liebler, professor of biochemistry and director of proteomics, Vanderbilt University Medical Center
(April 14, 2003) – For mapping protein modifications, the biggest problem is getting high-quality datasets that provide spectra of all the peptides in a mixture. You need a high degree of so-called sequence coverage. This is certainly made easier by tandem-LC approaches, but we still have a long way to go in that area in being able to acquire spectra of not only unmodified peptides but the modified forms. New instruments will certainly help us. The new generation of tandem mass spectrometers, whether it’s the MALDI-TOF/TOF, Q-TOFs, [or] the new ion trap FT instrument, will be the most useful for doing work on protein modifications. We also need to develop better affinity enrichment approaches to capture a subset of proteins that might be modified, for example. That will require some very creative chemistry and biochemistry.
Eric Phizicky, professor of biochemistry and biophysics, University of Rochester School of Medicine
(May 12, 2003) – You might have to build a library of maybe a million proteins. At the analytical end, [you need to find] very specific probes, so you could not just say that the concentration of this particular protein went up tenfold, but that it’s the phosphorylation of serine 32 that went up. For signal transduction pathways and many other processes where it’s the phosphorylation state or some other modification that changes, and only at particular residues, that becomes difficult. It’s also a problem for studying protein function, not just for analysis, [for example when] the protein you are interested in only has [a] function when it’s phosphorylated or methylated or acetylated.
On the Proteomics Market
Ron Beavis, adjunct professor and senior fellow, Institute for Biophysical Dynamics, University of Chicago
(Feb. 24, 2003) – The thing to look for over the next few years is the emergence of a second wave of proteomics companies that are much more tightly focused than the first set of proteomics companies. There are a lot of people now, both on the business side and on the scientific side, who have had exposure to the technology. They understand it, they know what it can do. The next round of companies will be able to put together business models that more closely match the priorities of the larger companies they want to partner with. It means applying the technology in areas that are much more tightly focused and associated with specific therapeutics.
Gary Valaskovic, president and co-founder, New Objective
(May 26, 2003) – Certainly proteomics is a market that continues to grow. It’s perhaps not the heady days of 2000 and 2001, when it was bursting at the seams, but it’s still showing very healthy growth. If you compare proteomics to genomics, the analysis of the genome was a sprint; the analysis of the proteome, that’s a marathon.
Catherine Fenselau, professor of biochemistry, University of Maryland
(Dec. 5, 2003) – I think we oversold it in the beginning, so there’s a lot of investor burnout. A lot of the startups have closed or downsized substantially. I think we need to back up and put it on a more intellectual basis.