RTI's Jim Stephenson on IPG Strips and the Reliability of Mass Spec Data

Jim Stephenson
Senior program director, Mass Spectrometry Research Program
Research Triangle Institute

At A Glance

Name: Jim Stephenson

Position: Senior program director, Mass Spectrometry Research Program, Research Triangle Institute, since 2001.

Background: Research staff member, Oak Ridge National Laboratory, 1997-2001; postdoc, 1995-1997.

Research Assistant, University of Florida, 1990-1995.

Applications development chemist, field engineer, Thermo Finnigan, 1987-1990.

PhD in analytical chemistry, University of Florida, 1995.


At the Cambridge Healthtech Institute's Biomarker Discovery Summit held last week in Philadelphia, Jim Stephenson gave a talk on 'Top-Down and Bottom-Up Proteomics: From Identification to Validation.' ProteoMonitor caught up with Stephenson before his talk to find out about his background and the technology that he is developing.

How did you get into both top-down and bottom-up proteomics?

My background is primarily in instrumentation — I was in Rick Yost's group in Florida doing ion trap work. Before that, I worked at what is now Thermo Finnigan doing ion trap work. I left Florida for Oak Ridge National Laboratory and worked with Scott McLuckey. That was in 1995-1997, as a postdoc, and 1997-2001 as a research scientist.

I primarily worked on laying the foundation for all the ion-ion, top-down proteomic work that you see coming out of Don Hunt's lab now. We did all the proton transfer and ion-ion work on 3D traps when I was there. That's how I got started in that. We built up a biological research core at Oak Ridge to do instrumentation development as well as basic proteomic work.

The biggest challenge — there were a lot of them — was fragmenting intact proteins using a low-resolution instrument, reducing the charge state to +1, and extending the mass range of the ion trap. We worked out some really interesting methodology to be able to do that. Just understanding how ion-ion interactions worked, and what we could do with them from a practical standpoint, took us quite a while.

At that time, while we were working on that, a technique called ECD, invented by Zubarev and co-workers — electron capture dissociation of intact proteins — came along, and that is really the primary top-down technology right now.

Did your previous work at Thermo Finnigan help you to develop technology?

Not really. But at that point I gained a really good understanding of ion traps and how they worked, so when I went to graduate school I could do a lot more than somebody coming straight from undergraduate school. It gave me a big advantage.

So you've continued to do top-down proteomics at the Research Triangle Institute?

A little bit. We've been mainly bottom-up focused, but we're going to be moving back into the top-down area. No matter what technique you use, whether top-down or bottom-up, you need to be sure that what you think you're identifying is really what you are identifying. In other words, you need to look at the large datasets that you can generate with off-the-shelf instruments, separate the real IDs from the false positives, and mine the data for false negatives. That's a huge issue. In top-down, that problem largely doesn't exist, because you're looking at intact proteins. With bottom-up, you're looking at a lot of smaller peptides that can be common to a lot of different proteins.

Some of the techniques that are out there these days, particularly mathematical algorithms to assign probabilities, sometimes don't predict the biology the way that it needs to come out. And you can get an elevated false positive rate that way. We spend a lot of time thinking about how to get the best quality data out of these large datasets you can generate with instrumentation these days.

When did you start working on bottom-up proteomics?

When I moved out here in 2001.

Why did you decide to switch to bottom-up?

Well we started a group here from scratch at Research Triangle Institute, and the instrumentation that you would need to do bottom-up is much more readily available. From a cost standpoint, it's much easier to start out with a couple of ion traps and maybe a MALDI instrument than it is to buy large-scale ICR equipment, which is the mainstay of top-down proteomics.

What we've developed here is a way of taking peptide digests and putting them on an IPG strip, so that things are separated based on their pI. You can predict pI easily to within approximately 0.05 pI units or less, depending on the type of algorithm you use, which means you now have an orthogonal experimental parameter to your MS/MS data that you can predict reliably. If you have that, you can filter your data based on that approach, and that's something that we've published. Doing that takes the guesswork out of the statistical approaches. A lot of the time, probability and statistical approaches can't do things like look at the redundancy of amino acids. They have difficulty dealing with the amino acid frequencies of specific organisms.
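To make the idea concrete, here is a minimal sketch of peptide pI prediction by charge bisection. The pKa values are rough textbook numbers chosen for illustration; this is not the published algorithm Stephenson refers to, which may use different constants and corrections.

```python
# A minimal sketch of peptide pI prediction by charge bisection.
# The pKa table is illustrative; a real algorithm would use calibrated
# constants and, often, position-dependent corrections.

# Ionizable groups: N-terminus, C-terminus, and side chains.
PKA = {
    "N_term": 9.69, "C_term": 2.34,
    "K": 10.5, "R": 12.0, "H": 6.0,           # basic side chains
    "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1,  # acidic side chains
}

def net_charge(peptide: str, ph: float) -> float:
    """Net charge at a given pH via the Henderson-Hasselbalch equation."""
    pos = [PKA["N_term"]] + [PKA[aa] for aa in peptide if aa in "KRH"]
    neg = [PKA["C_term"]] + [PKA[aa] for aa in peptide if aa in "DECY"]
    charge = sum(1.0 / (1.0 + 10 ** (ph - pka)) for pka in pos)
    charge -= sum(1.0 / (1.0 + 10 ** (pka - ph)) for pka in neg)
    return charge

def predict_pi(peptide: str, tol: float = 0.001) -> float:
    """Bisect on pH for the point where the net charge crosses zero."""
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(peptide, mid) > 0:
            lo = mid  # still positively charged, so pI lies higher
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(predict_pi("PEPTIDEK"), 2))  # an acidic tryptic peptide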

So with this approach, there's a lower probability of false positives and false negatives?

Yes. You can actually go in and mine data. Whatever scoring algorithm you use — whether it's Mascot or Sequest — what this gives you is an experimental way to set the cutoff. You can go in and say, 'I know I want to draw the line right here, because I have an orthogonal technique.' If a peptide I find via my MS/MS search doesn't fit within the defined pI range, it's a false positive.
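As a rough illustration of such an experimental cutoff, the sketch below builds on the pI predictor above and flags identifications whose predicted pI falls outside the strip fraction's window. The function, the margin, and the example hits are all hypothetical, not Stephenson's published workflow.

```python
# A hypothetical pI filter in the spirit described above: keep only
# identifications whose predicted pI falls inside the pI window of the
# IPG strip fraction they came from (predict_pi is the sketch above).

def filter_by_pi(hits, pi_min, pi_max, margin=0.05):
    """Split (peptide, score) hits into kept and rejected by predicted pI."""
    kept, rejected = [], []
    for peptide, score in hits:
        pi = predict_pi(peptide)
        if pi_min - margin <= pi <= pi_max + margin:
            kept.append((peptide, score, pi))
        else:
            rejected.append((peptide, score, pi))  # likely false positive
    return kept, rejected

# Made-up hits from a fraction covering roughly pI 3.7-3.9: the strongly
# basic KRKRAK predicts far above the window and is flagged.
hits = [("PEPTIDEK", 45.2), ("KRKRAK", 38.7)]
kept, rejected = filter_by_pi(hits, 3.7, 3.9)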

Other people have done this successfully, though not with as much resolution, using liquid-based IEF separations. A good example would be Tim Griffin up at the University of Minnesota.

We decided to use IPG strips for several reasons. One, pI algorithms were originally designed to work with IPG strips. We started working with that — it's something that everyone has in their lab. We used it for shotgun proteomics originally, as opposed to doing SCX separation. There's a wide variety of other advantages of the technique over traditional SCX.

What kind of applications are you using this kind of technology for?

We're using the technology for several different applications. We are looking at a project with some collaborators at Duke University, looking at pulmonary disease and explanted lung tissue. We are using the technology on a project for the National Institute of Allergy and Infectious Diseases, looking at studying innate and adaptive immunity for cholera and typhoid vaccine trials. We're using it for other biomarker-type discovery work as well.

We've got some proposals in to look at this for exposure-type research. So toxicogenomic/proteomic research.

Are you getting back into top-down proteomics?

We'll be getting into top-down proteomics a little more. Because of the collaborations and non-disclosure agreements, I really can't chat about it in much detail right now. It is very similar to some of the work coming out of Don Hunt's lab at the University of Virginia right now on what's known as ETD, or electron transfer dissociation.

What plans do you have for the future? Do you have any plans to commercialize your technology?

We'll probably continue to develop the technology and the approaches, but we're expanding more now into the applications.

Commercialization is already in the works, but I can't tell you on the record about the details. I would say we expect something to be released within the next six months.

We've also looked at some techniques for post-translational modifications — putting peptides on the strip and separating them. A lot of times this will allow us to separate modified from unmodified peptides. By separating things on pI, you improve their ionization efficiency greatly, so a lot of the effects you see with PTMs modifying charge groups and reducing ionization efficiency largely go away. We've done some preliminary experiments, just quick and dirty, but it's something that we don't have the personnel or all the resources to chase down right now.
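As a toy illustration of why that separation works, the sketch below reuses net_charge from the earlier example and models a phosphate as one extra acidic site with an assumed second-ionization pKa of about 6.0, which pulls the predicted pI of the modified peptide well below that of the unmodified form. The pKa value and the peptide are assumptions for demonstration, not RTI's method.

```python
# Toy calculation only: model a phosphate as one extra acidic site with
# an assumed second-ionization pKa of ~6.0 and watch the predicted pI
# drop, which is why modified and unmodified forms can resolve by pI.

def predict_pi_modified(peptide, extra_acidic_pkas=(), tol=0.001):
    """pI by bisection, counting extra acidic groups (e.g., a phosphate)."""
    def charge(ph):
        c = net_charge(peptide, ph)  # from the earlier sketch
        return c - sum(1.0 / (1.0 + 10 ** (pka - ph))
                       for pka in extra_acidic_pkas)
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if charge(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

print(predict_pi_modified("SAMPLEK"))          # unmodified form
print(predict_pi_modified("SAMPLEK", (6.0,)))  # phosphorylated: lower pI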

We've been working on improving pI prediction algorithms as well. Most of our software has been written in-house.

I guess our strength is in the diversity of our group, in terms of people's backgrounds. I have a young group with a lot of very talented people who bring a wide variety of expertise to the table that really has driven the research.
