At A Glance
Name: Eric Peters
Position: Group leader, Protein Profiling Group, Genomics Institute of the Novartis Research Foundation, San Diego, since GNF’s inception in 1999 (see PM 1-20-03).
Background: MS in polymer chemistry, Cornell University, 1996.
BS in chemistry, Georgetown University, 1989.
How did you get involved with proteomics?
I had done my PhD work in polymer chemistry and a lot of it was geared toward the development of macroporous polymer supports. Those have many applications — you see a lot of these things in the monolithic stationary phases now for doing separations. As part of the research, you get to look in the analytical journals and see the applications, and certainly proteins and peptides were things that people were interested in. Pete Schultz [director of GNF] was looking for a person with this type of background, and it evolved from there.
What are you working on now?
You’ve got to understand things in the context of GNF. We’re this weird academic-industrial hybrid. And so our group is spread over three areas: one is providing service to biological discovery projects going on here, so if people have immediate questions, we spend about a third of our time on that. Another third of our time is going to the other extreme and building platforms to do more efficient, high-throughput proteomics. And that involves everything from instrumentation development and design, to integration of systems together. So you see what’s attractive to you, and then you look and see if the things exist that allow you to do it, and if they don’t, you produce them. And then in the middle is the compromise between the two, and that’s using traditional methodologies and commercially available instruments to start looking at things on a broader scale — across multiple proteins at one time.
Tell me about your work creating a new platform technology.
In the first year or two [of GNF’s existence] it was about ‘how do you industrialize a lot of processes’ — not just how you do 100, but how do you do 1,000, or a million. That philosophy was moving everyone, and certainly they wanted to take a look at it in proteomics. The ideal was that you needed to see how different parts could come together so that the whole is greater than the sum of the parts in the system.
The two major things that we started from were a MALDI-based platform, to get rid of a lot of the timing [in]compatibilities that exist in the electrospray instruments, and at the same time a high mass accuracy platform — this is extremely high mass accuracy measurements, and not just on a standard sample, but consistently on real-world samples. [We wanted] to take advantage of both of those in the system, and at the time that roughly meant high-throughput MALDI FT-MS, which didn’t exist at the time, and so it was ‘how do you put those components together?’ and more importantly, ‘how do you do it from multiple columns at once, consistently, with matrix addition, so you’re confident you’re going to get reproducible results?’ It’s a high-throughput issue in terms of how much information you get per analysis. The question is, how can you do it so that, in the least number of runs, you move toward the greatest number of identifications?
[The second thing] is to have the platform automated so each of the parts is designed to work with the others. So if you have a MALDI-based system, MALDI has totally different ionization characteristics than electrospray, and each peptide flies differently, so we had to come up with labeling compounds that would react with lysine residues and allow [those peptides] to fly equally well or better than arginine-containing species. But at the same time, you want that label to allow you to do differential quantitation, or to know the number of lysines contained in a molecule, and to use that information together with high mass accuracy to further increase your certainty or your number of identifications. Then you have the deposition system, and how to do that in an optimal manner, and how do you do the calibration on the FT-MS in an expedient fashion. A lot of these things are coming out in papers — several of the components were just published in Analytical Chemistry.
So what processes are you going through to accomplish this optimization?
Some of it is actually building the instrumentation, so we build our own deposition system from an LC column onto a plate and [combine] everything to give you better reproducibility and better concentration. Another example is calibration on the FT-MS: you have a high-speed MALDI stage that is totally integrated into the software of the system, such that after the laser hits the spot, the ions are transferred down and held in an accumulation hexapole. Immediately thereafter the stage rolls to the side of the plate and hits a calibrant spot, those [calibrant ions] go down and are mixed together with the others in the hexapole, and then they are all sent down into the detector in one packet. That way you are internally calibrating every spot without ever adding calibrants [to the sample] or wasting a lot of space on your plate.
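The payoff of co-trapping calibrant and analyte ions is that every spectrum carries reference peaks of known mass, so each spot can be corrected individually. A minimal sketch of that idea, with made-up masses and a simple multiplicative correction (the function name and values are illustrative, not GNF’s actual calibration routine):

```python
# Hypothetical sketch of internal calibration: calibrant ions of known mass are
# measured in the same packet as the analyte ions, so each spectrum can derive
# its own correction factor from how far the calibrant peaks are off.
def recalibrate(observed_mz, calib_observed, calib_true):
    """Apply a single multiplicative correction derived from calibrant peaks."""
    # Average ratio of true to observed calibrant m/z gives the correction factor.
    factor = sum(t / o for t, o in zip(calib_true, calib_observed)) / len(calib_true)
    return [mz * factor for mz in observed_mz]

# Here the calibrant peaks read ~5 ppm high, so the analyte peaks are pulled
# back down by the same relative amount.
corrected = recalibrate([1000.0050, 1500.0075],
                        calib_observed=[500.0025, 800.0040],
                        calib_true=[500.0000, 800.0000])
```

Real FT-ICR calibration uses a frequency-to-m/z calibration law rather than a flat scale factor, but the principle — per-spectrum correction from internal references — is the same.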
Do you start with commercial instruments or build your own?
The FT-ICR we refer to as a highly modified instrument, meaning there’s nothing left of the original. I think that’s something that most FT-ICR groups don’t put in the front of their papers — the fact that they are highly modified instruments. You don’t just buy these things from the manufacturer as is, and start plowing away. I think that’s changing — the [Finnigan LTQ FT-MS] hybrid instrument that came out from Thermo Finnigan is the first that’s kind of user-friendly to some extent and can be run in a somewhat walk-up fashion. But a lot of the FT-ICR papers that come out use highly-specialized instruments that [are] not what’s delivered to your lab.
Tell me about working in an academic-industry hybrid.
It’s still an experiment. We spend some of our time being told, ‘this is what you will be doing,’ but we also have a lot of freedom in terms of exploring these systems. When we said we want to build a MALDI FT-ICR platform, that was not a six-month project. So you can do things long term, but certainly you have an eye out to commercializing some of this technology. We have one deal licensing a labeling technology from the lab to Agilent, and we’re exploring some other technologies that were developed in the course of our work.
What is GNF’s relationship to Novartis Pharma?
That’s one that everyone debates every day. We are not part of Novartis Pharma. The Swiss have invented a third category — a for-profit charity. [The company,] Novartis, gives a lot of money annually to the Novartis Research Foundation, and the foundation decides how to spread that money out. Technically Novartis does not say ‘do this, do that.’ And yet at the same time, they’re definitely interested in what’s going on. They have the first right of refusal on any patent, so if a drug is discovered they’re certainly going to take that immediately.
What are the big improvements that need to be made in the future of proteomics?
I think there has to be some work on instrumentation — people are adapting the experiments they’re doing to the instrumentation that’s available. Mass accuracy is something that can really make a difference if you can consistently get 1 ppm accuracy regardless of the samples. The upfront methodologies have a lot [to improve] also — but you have to choose your battle.
In terms of mass accuracy, you’re always going to have to do some tandem MS, but if you have extremely high mass accuracy measurements — 1 ppm or smaller — and in some cases information as to amino acid content — not where they are in the sequence, but that you have this many lysines, or something along those lines — then it’s possible to take a single MS measurement and identify absolutely the protein it came from without doing tandem MS.
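The logic here can be made concrete with a toy candidate search. The peptide names, masses, and lysine counts below are invented for illustration; the point is only how a 1 ppm window plus a residue count collapses an ambiguous match to a unique one:

```python
# Hypothetical sketch: tightening the mass tolerance and adding a lysine count
# (from the label) shrinks the candidate list for a single MS measurement.
# Entries are (name, monoisotopic_mass, lysine_count) — illustrative values only.
PEPTIDES = [
    ("pepA", 1234.5670, 1),
    ("pepB", 1234.5681, 2),
    ("pepC", 1234.5795, 1),
    ("pepD", 1235.5700, 0),
]

def candidates(measured_mass, ppm_tol, lys_count=None):
    tol = measured_mass * ppm_tol / 1e6  # convert ppm to an absolute mass window
    return [name for name, mass, k in PEPTIDES
            if abs(mass - measured_mass) <= tol
            and (lys_count is None or k == lys_count)]

loose = candidates(1234.5672, ppm_tol=10)              # 10 ppm: three candidates
tight = candidates(1234.5672, ppm_tol=1, lys_count=1)  # 1 ppm + K count: unique
```

At 10 ppm all three isobaric-looking peptides survive; at 1 ppm with a known lysine count, only one does, which is the single-measurement identification being described.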
Then there’s an idea first put forth by Dick Smith’s group [at Pacific Northwest National Laboratory in Richland, Wash.]. If you need six or seven amino acids in a row by sequence when you’re doing tandem MS to say unequivocally, ‘it’s this protein,’ then by the same token, if you can get the mass accurate enough, and perhaps other information, then you would again be able to say, ‘it’s from this protein.’ Or you could at least greatly reduce the number of times you need to do something. For example, you [can] go in and get a measurement, identify the protein by any means, and then say, ‘these are all the peptides I expect to get from that protein, and if I ever see them again anywhere else in my analysis, I just throw them away and move on.’ If your mass accuracy is at a normal level, you’ve quickly thrown your whole sample away. But if it’s extremely good, you have a lot of chances to catch [discarded proteins], and you can start moving faster and faster through the system.
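The exclusion strategy described above can be sketched in a few lines. The masses are invented and the helper is hypothetical; the point is that a 1 ppm window lets you discard already-explained peaks without accidentally discarding everything else:

```python
# Hypothetical sketch of an exclusion list: once a protein is identified, its
# expected peptide masses are excluded, and only unexplained peaks move on
# (e.g. to tandem MS). A 1 ppm window makes the exclusion safely narrow.
def build_filter(excluded_masses, ppm_tol):
    def is_new(mz):
        # A peak is "new" if it matches none of the excluded masses within tolerance.
        return all(abs(mz - m) > m * ppm_tol / 1e6 for m in excluded_masses)
    return is_new

# Peptide masses expected from an already-identified protein (illustrative values).
exclude = [842.5099, 1045.5642, 2211.1046]
is_new = build_filter(exclude, ppm_tol=1)

peaks = [842.5100, 997.4321, 1045.5641, 1500.7000]
novel = [p for p in peaks if is_new(p)]  # only unexplained peaks remain
```

With a sloppy tolerance the same filter would swallow nearby unrelated peaks, which is the “thrown your whole sample away” failure mode in the interview.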
But those are all potential applications in the future, when you’re pushing it to its extreme. Short term, high mass accuracy has a huge benefit in terms of confidence in the assignment you make from the beginning. You have all of these one-hit wonders in tandem MS where you just get one peptide out of a huge thing, and certainly some of those are legitimate, but even when you go in and manually evaluate them, mistakes can be made. To have high mass accuracy measurements changes your confidence level so much — you can do experiments much more rapidly with no need to validate, and you’re much more sure of what you have. Mass accuracy changes what the major hit is compared to the next possible hit.