Name: Rohan Thakur
Position: Associate Director, Specialized Mass Spectrometry, at Taylor Technology
Background: Director at Thermo Fisher; TSQ Product Manager at Thermo Electron; Ion Source/Ion Trap Program Manager at Thermo Finnigan
Protein biomarker development begins with discovery, but considerable work is needed after that to validate the marker of interest. In particular, rigorous quantitation of a biomarker's presence in patient samples is required for researchers to determine whether observed changes are, in fact, significant.
Quantitating the presence of endogenous proteins in a sample is a task with its own unique set of challenges, requiring approaches different from those that researchers have traditionally used for work with exogenous small molecules.
Rohan Thakur is associate director of specialized mass spectrometry at Princeton, NJ-based biotech firm Taylor Technology, where he oversees the company's work in protein biomarker quantitation. This week ProteoMonitor spoke to him about the challenges of the process and the evolution of the field.
Below is an edited version of the interview.
What issues are involved with quantitating a potential biomarker? What are the challenges of the process?
The first challenge is getting a standard matrix, especially if it's an endogenous protein, because the levels aren't consistent in the human population. So determining the baseline level of this peptide in the normal population is itself a difficult question.
Let's say natively you have 100 ng per mL of testosterone in your blood stream. You're younger than me. My blood stream may have 60 ng per mL testosterone. They're both normal, except if you're looking for changes in testosterone, what is the baseline? That becomes the issue. And once you home in on that, then you have to measure every sample within that context. We set up the experimental protocol. We screen for blanks. What is a blank in that case? If the peptide is in everyone's bloodstream, what is your blank number? So we have to determine all these things in the experiment.
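To make the baseline problem Thakur describes concrete, here is a minimal sketch of estimating a population reference range from measured concentrations. The cohort values and percentile cutoffs are hypothetical illustrations, not data from the interview; real baseline studies use much larger cohorts and validated statistical methods.

```python
# Hypothetical baseline estimation for an endogenous analyte.
# Values are illustrative only.

def reference_range(concentrations, lower_pct=2.5, upper_pct=97.5):
    """Return (low, high) percentile bounds as a simple reference range."""
    data = sorted(concentrations)
    n = len(data)

    def percentile(p):
        # Linear interpolation between the two closest ranks.
        k = (n - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, n - 1)
        return data[f] + (data[c] - data[f]) * (k - f)

    return percentile(lower_pct), percentile(upper_pct)

# Illustrative testosterone-like measurements (ng/mL) in a "normal" cohort,
# spanning the 60-100 ng/mL spread Thakur mentions:
cohort = [60, 72, 85, 90, 95, 100, 104, 110, 118, 130]
low, high = reference_range(cohort)
```

Any patient sample then has to be interpreted against that interval rather than a single fixed number, which is exactly why the "what is a blank?" question has no simple answer for an endogenous protein.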
Once you've established a baseline, what comes next?
The next step [is getting] samples from a patient population, which could be completely different from plasma that has come from a naïve population. Treatment-naïve [patients] are basically normal patients who have completely different plasma constituents, and then you have sick patients who are in the disease state who have completely different plasma constituents. So now you have to make sure that your method works in this changed state. And on top of that these people could be taking different medication; you could have people who are smokers; you could have people who take caffeine, which could or could not be present in the plasma that was used to develop that method. So now you're testing that method. So at every step there's a pitfall that you're constantly monitoring.
What technical difficulties are involved? What techniques do you use to solve them?
When you're trying to analyze a small molecule, the delta in chemistry is huge and therefore [conventional] sample cleanup works. When you're doing proteomics or you're doing biomarker work, you're trying to clean up a peptide or a protein from other proteins in the sample itself, so now the delta in chemistry that you can exploit to clean up your sample becomes very small and the techniques that have been in good use for the last 20 years for bioanalysis actually start falling apart.
[For example], if you wanted to concentrate your sample after you did a cleanup, you just evaporated your eluent and brought it up in smaller volume. Well, you can't do that for a peptide because you may destroy it completely. So now you go to micro-elution techniques. And then a technique known as column switching might become the norm when it was the exception for small molecules. You exploit the difference in chemistry in sample cleanup and then you start exploiting the chemistry of the different columns using different column switching techniques in LC-LC-MS. So instead of doing LC-MS, now you're doing LC-LC-MS-MS. So you're just increasing the resolution of separation right from sample cleanup until it's introduced into the mass spectrometer to ensure you're not compromised in your quantitative aspect.
When did the demand for this develop? When did the biomarker work develop to the point where people said, 'Hey, we need someone on the quantitation side to help with this?'
Ten years ago, I would say, people started saying, 'OK, the genome's been sequenced, now let's start taking a look at the proteome and if there's a change in the genome we should be able to pick it up in the proteome.' That's how proteomics started.
What ended up happening was they started doing shotgun studies looking at everything in the sample and then looking at subtle changes in thousands of those proteins. So they would take this soup of proteins, digest them, get the peptides and then ask, 'Is there a sequence of peptides or any one peptide representative of a protein that changes to signal a disease state?' Initially they found a few that lit up like lighthouses for prostate cancer – the PSA test – so they said, 'OK, wow, we got this, now let’s see if there are any other diseases.'
Well, turns out it's not that easy. The screens are going on [and you might] think [you] have [a] biomarker. This thing is going up and down. One, can you confirm that that's true? And two, can you tell me by how much? … Is it going up by a factor of 2, by a factor of 4, by a factor of 1.8? Anything that's between a factor of 2 and 4 is almost impossible without rigorous quantitation.
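Thakur's point about fold changes between 2 and 4 can be illustrated with a back-of-the-envelope check (numbers and the statistical model are my own assumptions, not from the interview): whether a fold change is distinguishable from no change depends directly on the assay's coefficient of variation and the number of replicates.

```python
import math

def detectable_fold_change(cv, n, z=1.96):
    """Smallest fold change distinguishable from no change at ~95% confidence,
    given an assay coefficient of variation `cv` and `n` replicates per group.
    Uses a simple log-normal approximation; illustrative only."""
    # Approximate standard deviation on the log scale.
    sigma = math.log(1.0 + cv)
    # Standard error of the difference between two group means (log scale).
    se = sigma * math.sqrt(2.0 / n)
    return math.exp(z * se)

# With a 20% CV and triplicate measurements, changes below roughly
# 1.3-fold are lost in the noise:
fc = detectable_fold_change(cv=0.20, n=3)
```

As the CV grows, the smallest detectable fold change grows with it, which is why telling a 1.8-fold change from a 2-fold change demands the kind of rigorous, low-variability quantitation Thakur describes.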
What are the tools you use for this?
The easiest and quickest way to test is mass spec, and then as the assay gets more defined and gains fidelity you move into immunochemistry and immunogenicity testing if it's a protein-based drug. If it's just a biomarker you can go either way. You know what it is, and if you're within a factor of ten then immunochemistry buys you that quite easily, but the setup is significant because you have to prepare an antibody, develop the test, and then conduct the test. So there's a frontloaded expense. In mass spectrometry, you put it on, [and] if it works, you're good to go.
So if you can do mass spec, that's the preferable way?
It's the most expeditious route. It may not be as robust because the immunochemistry test is simply a light reflectance test and mass spec has many, many moving parts. There are issues on both sides, but, yeah, if you can get away with mass spec, usually people prefer it.
What kind of mass spec do you use?
It'll be a triple-stage quadrupole for quantitation.
Does the process ever get contentious? Say, for instance, if you come back with a result that says a potential biomarker isn't actually valid?
It's not contentious. [Researchers] realize that there are many nuances to this measurement, and it may not pan out. Because more often than not you are misled in discovery. In discovery your goal is to look for change. In quantitation our goal is to actually discount any change. So the way we approach the problem is very, very different. When you're seeking change, you're going to start believing in change. We take the opposite approach and say, 'Alright, these are all the checks and balances where change does not occur,' and then we say, 'Oh, look, it meets all these different criteria, so change must have occurred.' It's a different way to look at the same problem.
What percentage of biomarkers brought to you turn out to be valid?
I don't have that answer because usually when it comes to me they have a good handle that it is changing, and they just want to know by how much exactly. Let's say they find 50 peptides that go up and down. It costs a lot of money to look at 50 peptides rigorously, so they'll do a lot of prep work and say, 'OK, let's home in on five of these.' In the old days they would just throw it at us and say, 'Hey, here's half a million bucks, tell us what it is.' Now that stream of cash has dried up, so they tell us, 'Here's three peptides, here's $50,000, please let us know by how much they've gone up or down.'
Is this frugality due to anything that's changed in the industry in particular, or is it more just a matter of the larger economic climate?
I think it's just adjusting the expectations. This stuff isn't easy, and you're not going to hit this every time that you find something. And with the current business climate the way it is, you're trying to mitigate risk at all steps. When business is good you throw caution to the wind.
What does this process cost? If someone comes to you with a biomarker and needs it quantitated, what's the price range?
It usually runs from about $10,000 for a feasibility study up to about $100,000 for a full-fledged single-cohort study.
How is demand for this sort of service trending?
It's double-digit growth every year. Toxicity remains a huge issue, and one way to look at toxicity of small-molecule drugs is to look at a biomarker. That's what's driving a lot of this. It's nice if you can find a biomarker for [diagnostics], but if not, the next best thing is toxicity, because 50 percent of all drugs are still small molecules.