Scientific fellow, Advanced Research and Technology
At A Glance
Name: Darryl Pappin
Position: Scientific fellow, Advanced Research and Technology, Applied Biosystems, 2002-present.
Background: Professor of proteomics, Imperial College School of Medicine, London, 2000-2002.
Senior scientist, head of protein sequencing laboratory, Imperial Cancer Research Fund, London, 1990-2000.
Senior scientist, head of protein chemistry group, Millipore, 1987-1990.
Postdoc, Department of Biochemistry, University of Leeds, 1983-1987.
PhD in biochemistry, University of Leeds, 1983.
Darryl Pappin has been a leader in the development of iTRAQ reagents. Last week, he gave a talk at the American Society for Mass Spectrometry conference in Seattle about a new 8-plex iTRAQ reagent that Applied Biosystems plans to release later this year.
ProteoMonitor spoke with Pappin to find out more about the 8-plex reagent and the history of iTRAQ.
Can you give me some background about the history of iTRAQ and how you got into developing it?
We thought it up when I was in London. I've been analyzing proteins for 30 years. In the '80s, it was all Edman chemistry and that kind of stuff. When I took up the academic post at the Imperial Cancer Research Fund in London in 1990, it was already apparent that mass spec methods, though they'd been quietly in the background all the way through the '80s, were just about to really break through.
So when I started a lab, we really started looking at mass spec-based approaches. There were two research threads I had going in the lab — one was on the chemistry side, and one was on the computational side. In the very early '90s, people were just starting to get their hands on the first bench-top mass specs, as opposed to instruments the size of a room, so mass specs were available to a wider audience.
We started working on these computer programs, and the first set of programs were these peptide mass-fingerprinting programs, which several groups published on almost simultaneously in 1993. That was the MOWSE program.
At the same time, we were getting mass specs into the lab, and we were actually starting to fragment peptides, and to try to figure out what the sequence of the peptide was from the mass spectra. In those days — this is around 1994, 95, 96 — everything was done by hand. You had to sit down with a piece of paper and a calculator and try to figure your way through the spectra and the sequence. That could take a long time.
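The pencil-and-calculator work he describes comes down to reading residue masses off the gaps between consecutive fragment-ion peaks. A minimal sketch of that idea (a handful of standard monoisotopic residue masses; the peak list and tolerance are illustrative, not from his original work):

```python
# Monoisotopic residue masses (Da) for a few common amino acids.
RESIDUE_MASSES = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203,
    "P": 97.05276, "V": 99.06841, "L": 113.08406,
}

def read_ladder(peaks, tol=0.02):
    """Infer residues from the mass gaps in a fragment-ion ladder."""
    seq = []
    for lo, hi in zip(peaks, peaks[1:]):
        gap = hi - lo
        # Find the residue whose mass matches the gap within tolerance.
        match = next((aa for aa, m in RESIDUE_MASSES.items()
                      if abs(m - gap) <= tol), "?")
        seq.append(match)
    return "".join(seq)

# A hypothetical b-ion ladder: each step adds one residue's mass.
peaks = [175.119, 232.140, 303.177, 402.246]
print(read_ladder(peaks))  # → GAV
```

Doing this by eye across a whole spectrum, with missing peaks and ambiguous gaps, is what made manual interpretation so slow.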
On the chemistry side, we started playing around with derivatization techniques. And they were directed at, 'Can we derivatize these peptides to make them fragment more simply, so that we get simpler spectra that we can interpret more easily?'
That's how it started. We started playing around with a lot of small-molecule chemical tags. Most of the time these were basic in nature, because we found out pretty quickly that basic tags had the greatest effect on fragmentation. We ended up with a couple of favorites that we used in the lab all the time for this sequencing work.
But one of the chemists I had had been playing around with a lot of other structures, and out of this grew a lot of accumulated knowledge about the nature of these small chemical tagging molecules — what they did, their properties. Having seen Ruedi Aebersold's paper on ICAT in 1999, I started thinking about it, and it suddenly hit me that you could solve this problem by making everything isobaric — the same mass — and I already knew how to do it.
There's a lot of stuff we were doing in the background for other things, and an acquired knowledge that was just sitting there, and it just required that little additional step of, 'Hey, I know how to make everything isobaric.'
What triggered you to think about making everything isobaric?
It was a lot of things. Ruedi's paper came out in 1999. We read it, and it wasn't of immediate impact to the work we were doing in the lab, because like most of the world, most of the quantitative proteomics we'd been doing up to that point had been based on 2D gels. And Ruedi's stuff was purely peptide based.
With 2D gels, you run the gels, you analyze them, and you do your quantitation on the gels, either by Coomassie staining, fluorescence, or whatever. And the gel tells you which proteins have changed — gone up, gone down, moved, disappeared. And you simply pick those spots out and try to identify them by mass spec.
In that world, the mass spec side of things is actually quite a simple problem. The gels tell you which proteins you want to look at.
If you flip that on its head and say, 'I want to do everything completely in the mass spec world' — this sort of shotgun, peptide proteomics — the problem is the quantitation. I can easily do a MudPIT-type experiment where I just digest everything, do a 2D peptide chromatographic separation, and try to get the mass spec to identify everything I see. But the problem is, how do I get the quantitation?
Ruedi started thinking about this, and that's really how ICAT came into existence. You were measuring signal intensities in the mass spec as part of the workflow for quantitation.
One of the philosophical problems behind ICAT is that right from the beginning, it was only a binary set of reagents — you could only look at two things: normal cells versus diseased, et cetera. If you end up doing multiple experiments, it's always multiple two-way comparisons — A versus B, A versus C, then B versus C — and it gets very complicated and time consuming.
So the question was, 'How could you multiplex this? How could you increase your efficiency?'
One way is to take the ICAT approach and say, 'The two ICAT reagents I have are mass delta zero and mass delta nine.' Well, I can easily go out and make a larger set of ICAT reagents — a zero, a delta three, a delta seven, a delta nine. The problem is you've now made your mass spec world a lot more complicated, because every species now exists as three, four, five peaks. And in very complex separations, like a whole-cell lysate from a mammalian cell line, everything's going to overlap, and it's going to be too complicated.
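The peak-multiplication problem can be seen with a toy calculation: with a set of non-isobaric mass deltas, every peptide shows up at one MS1 mass per delta, while isobaric tags leave one peak per peptide. All the masses below are hypothetical, chosen just for illustration:

```python
def ms1_peaks(peptide_masses, label_deltas):
    """Each peptide appears once per distinct label mass in the MS1 spectrum."""
    return sorted({m + d for m in peptide_masses for d in label_deltas})

peptides = [800.4, 801.1, 802.3]       # hypothetical peptide masses (Da)
non_isobaric = [0.0, 3.0, 7.0, 9.0]    # delta-0/3/7/9 style reagent set
isobaric = [145.1]                     # one shared total mass for every tag

print(len(ms1_peaks(peptides, non_isobaric)))  # 12 peaks: crowded spectrum
print(len(ms1_peaks(peptides, isobaric)))      # 3 peaks: one per peptide
```

Three peptides become twelve peaks with four non-isobaric labels; in a whole-cell lysate with thousands of peptides, that multiplication is what makes the spectrum unworkable.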
That's when I started thinking, 'I wonder if we can make everything the same mass initially?'
So somehow you've got to figure out how to do that in the mass spec world, and then get a signal somehow that differentiates all the members of the set that you've put together. It all came together in 1999 and 2000 as to how to actually do that.
Can you give a brief overview of how iTRAQ works?
Basically, you have these small chemical tags. The original generation was only about the size of a single amino acid. Essentially, the molecule itself is in two parts, and it breaks up when you fragment the peptide. One part, when it fragments, gives you a peak somewhere in the mass spectrum that's hopefully distinct from any other peptide-produced peaks, which you can use for your quantitation. We call that the reporter, or signature ion.
The other part is the balance part of the molecule. The way you make the reporter ions distinct is you enrich them with stable isotopes — you turn 12Cs into 13Cs, and 14Ns into 15Ns. And to keep the whole tag isobaric, you've got to have the same number of enriched centers on both sides of the tag. So you swap enriched centers between the reporter part of the molecule and the balance part, so that the overall mass of the tag remains the same.
So you can only get your quantitation information when you fragment them in MS/MS experiments.
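The reporter/balance swap is simple mass bookkeeping. Using the commonly cited nominal 4-plex values (reporter ions at m/z 114-117, balance groups of 31 down to 28 Da), every tag sums to the same total, so labeled peptides overlap in MS1 but give distinct reporter peaks in MS/MS:

```python
# Nominal 4-plex iTRAQ masses (Da): isotope enrichment is traded between
# the reporter and the balance so every tag has the same total mass.
TAGS = {
    "114": (114, 31),   # (reporter mass, balance mass)
    "115": (115, 30),
    "116": (116, 29),
    "117": (117, 28),
}

for name, (reporter, balance) in TAGS.items():
    total = reporter + balance
    print(f"tag {name}: reporter {reporter} + balance {balance} = {total}")
    assert total == 145  # isobaric: identical precursor mass in MS1
```

Because the four totals are identical, the four labeled versions of a peptide co-migrate and fragment together; only on fragmentation do the 114/115/116/117 reporters separate and report the relative amounts.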
How did iTRAQ become commercialized?
I developed the concept of it at Imperial College. We made a couple of demonstrator molecules to show that it would work.
Then in 2002, I was invited by Steve Martin to come and join the Proteome Research Center at Applied Biosystems in Framingham, Mass. So I laid aside academia and came back to the States. Then I introduced them to the iTRAQ idea, although it wasn't called iTRAQ at the time. And we pretty much took the concept and the proof-of-principle molecules through to what you saw as first-generation iTRAQ, the 4-plex.
Thinking it up in the lab, and making the first sets of molecules for proof of concept — that was only a fraction, I'd say less than five percent, of the process of turning that into a stable, commercial product.
What were some of the major obstacles you had to overcome to take iTRAQ from proof-of-principle to product?
One of my definitions of a good method is that it has to work in spite of what people try to do to it. So basically, to take a conceptual molecule that works, but is kind of difficult to handle, into something which is there as a kit on a shelf that works in most people's hands on most things, most of the time, requires a lot of attention to detail. There's a lot of things — just making sure people are aware of the chemistry they're doing, buffering compatibilities, making the reagents stable enough, and designing them so you can make them commercially at a reasonable cost — there are an awful lot of things that go into making a product that's workable.
You can make lots of molecules in the lab, and show that they work, but they're hard to handle — you have to use them in the dark, or something like that. They're just unworkable in a normal biology lab environment.
When you developed this 4-plex iTRAQ, were you always thinking that you could go further with multiplexing?
Absolutely. The reporter-ion part of the molecule is exactly the same between the 4-plex and the 8-plex. We synthesize it ourselves, and it's a great, compact structure for being able to put in a lot of enriched centers.
Is there a limit to how much multiplexing you can do?
Eight is not the limit. There's still a little bit more room for growth. The main problem you're going to start running into is that you don't want the tags themselves to get too big. The laws of physics are against you if the tags start to get too big. The kinetics of the reaction become lousy, and you'll never get complete reaction of every possible group.
So we're not there yet, but that is ultimately what is going to put an upper limit on this kind of thing.
How have you tested out the 8-plex iTRAQ so far?
We're working through production issues right now. In the lab, we've had the opportunity to play with it. We're working on a cell cycle system with a group at Stanford under Lucy Shapiro. It's pretty interesting. We're using the 8-plex with a time course — we follow this bacterium as it goes through an unusual, asymmetric cell division, all the way across the whole time course, which is about 140 minutes, in 20-minute steps.
We're just collecting the data now, and it's looking really good. For a couple of thousand proteins that we're going to end up identifying, we can effectively follow their expression profiles across the cell cycle, as they disappear, get degraded, get modified, etc. You just get all of this in one experiment, and you get a tremendous amount of information.
So it's pretty advanced. We're way past tinkering around with the reagents and figuring out whether they work. They work. They work as well as the original set. The reaction kinetics are a little bit slower — the reaction takes about an hour instead of about 15 minutes. Chromatography behavior and mass spec behavior are exactly the same, and bang! You get more information in the same time.
When do you think the 8-plex will be released?
We're still going through the production, scaling up production. It will most likely be some time this year. It could be late fall, towards the end of the year. If you watch over the summer, there'll be updates.
What is the competition for the new 8-plex iTRAQ? Have you dealt at all with i-PROT, the isobaric technology made by Agilix that now belongs to PerkinElmer?
Nope. And I don't intend to. With iTRAQ, we take it through, we work it through all these issues to make it a viable reagent, then you stick it out there in the marketplace. So basically, you give it your best shot, and the marketplace is what is going to decide on these things. So this is a question you should ask in five years' time when you see what's still around. Another one of my definitions of a good method is it has to be around in five years.
So I'm not really worried about what the other guys are doing. They'll put their products out there, but ultimately it's going to be the user who decides what works for them, for what applications.
What do you think of the relatively new label-free methods for quantitation?
That's one thing that's becoming more widely accepted in the scientific community. One thing that's hard to accept is that the changes we see in cells are actually pretty small. Most of the time you're dealing with changes of between one- and two-fold. A lot of things only change by 20 percent, 30 percent, 50 percent. So you have to have good precision and good statistics.
One thing that's emerging with any of the isotope enrichment methods — this includes ICAT, iTRAQ, SILAC — is that in experiments that we and others have done, we typically see standard deviations of around 20 percent for a big experiment with whole-cell lysate. That means we can see changes of 30 percent — a 1.3- or 1.4-fold increase — and be statistically very confident about the magnitude of that change.
With the label-free methods, the confidence you can get in the magnitude of a change is much lower. I don't think anyone could, unless they ran dozens of experiments, claim a 20 percent standard deviation of measurement. It's usually much higher, so you can only confidently detect a two-fold or three-fold expression change.
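The precision argument is ordinary statistics: a measured ratio is distinguishable from 1.0 (no change) when it sits several standard errors away. A rough sketch with illustrative numbers (the 20 percent figure is from the interview; the replicate count and label-free spread are assumptions for the example):

```python
import math

def z_score(mean_ratio, sd, n):
    """Standard errors between the measured ratio and 1.0 (no change)."""
    return (mean_ratio - 1.0) / (sd / math.sqrt(n))

# Isotope labeling, ~20% SD: four replicate measurements of a 1.3-fold
# change give z = 0.3 / (0.2 / 2) = 3 -- confidently detected.
print(round(z_score(1.3, 0.20, 4), 2))  # → 3.0

# Label-free, assumed ~60% SD: the same 1.3-fold change gives z = 1,
# indistinguishable from measurement noise.
print(round(z_score(1.3, 0.60, 4), 2))  # → 1.0
```

That gap is why, with label-free methods, only the two- or three-fold changers clear the detection threshold.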
In other words, the stable isotope methods seem to give you greater precision. And that, I think, is becoming apparent from papers that are being published generally.
That's not to say that label-free methods don't have a place. One issue that you have with any of the labeling methods is, of course, the cost. So if you wanted to compare hundreds of thousands of patient samples in a study, right now the cost of labeling reagents might be too high.
So for a first pass, certainly, your precision might be lower with label-free methods — you might only see proteins that change a lot, two-fold or three-fold — but for that, label-free techniques could be pretty good.
Speaking of cost, will the 8-plex be significantly more expensive than the 4-plex?
We won't really know until we've figured out manufacturing costs, but the feeling right now is that per analysis, the cost will be roughly equivalent. That's the thinking at the moment.
Do you think people will still use the 4-plex iTRAQ?
That's a pretty interesting question. The 4-plex molecules themselves, being nice and small, make great small-molecule labels as well — for example, for amino acid, lipid, or steroid analysis.
When you get to the 8-plex, the tags themselves are getting much bigger. Imagine you have a huge tag, and one amino acid on the end of it. Now the properties of the molecule are dominated by the tag, so the chromatography is going to be very difficult.
For proteomics, you're tagging much larger molecules as your starting set. There's probably no reason at all for people to not move to the 8-plex.
The real change in proteomics right now is that it's becoming quantitative. Things like iTRAQ are just taking that thinking into the shotgun, peptide-based proteomic workflow. It absolutely had to happen.