Q&A: Pfizer's Dominic Spinella on the Challenges of Using NGS in Clinical Trials


Name: Dominic Spinella
Title: Head of translational and molecular medicine at Pfizer Biotherapeutics Division
Experience: Head of translational medicine, Pfizer Oncology
Faculty in the department of medicine and microbiology and immunology, University of Tennessee Medical School
La Jolla site head of experimental medicine, Pfizer
Various positions at Chugai Pharmaceuticals, 1996-2004
Education: BS in biology from Syracuse University, 1976
MS in genetics, Rutgers University, 1980
PhD in immunology, Rutgers University, 1982
Age: 57

Next-gen sequencing has made significant inroads into the clinical space over the last several years as a handful of diagnostic tests based on targeted sequencing have hit the market and a growing number of clinicians have adopted whole-genome sequencing to diagnose rare disease and guide therapy.

Despite these advances, however, pharmaceutical companies have been reluctant to broadly adopt the technology beyond early-stage discovery. Downstream applications of the technology — either as a way to stratify patients by drug response or for use as a companion diagnostic method — are currently not on pharma's radar.

During a panel discussion that was webcast from the Biomarker World Congress meeting last month in Philadelphia, representatives from pharma, academia, and health information technology companies discussed how — and whether — next-gen sequencing would impact the development of companion diagnostics.

Dominic Spinella, head of translational medicine for biotechnology and biotherapeutics at Pfizer, suggested during the discussion that while it is a useful technology, next-gen sequencing would not be adopted by pharma for companion diagnostics. Rather, he said, he believes it will remain a tool that is primarily used early in the research and discovery phases of drug development.

Spinella recently elaborated on his viewpoint in an interview with Clinical Sequencing News. The following is an edited transcript of the interview.


What do you see as the role of next-gen sequencing in pharmaceutical companies?

As sequencing DNA polymorphisms becomes more and more important for predicting drug response — either efficacy or adverse events — and as the technology becomes cheaper and more accessible, I think it is going to become quite useful for trying to elucidate those mechanisms.

From a research perspective, one can envision ways where patients are treated with drugs, their DNA is sequenced, and we start asking questions about what polymorphisms seem to correlate with the phenotype of interest, whether it is drug responsiveness, whether it is adverse events, or what not.

So, clearly this is no different in principle, at least from my perspective, than any of the other large-scale, hypothesis-independent technologies that have cropped up, [such as] microarrays, proteomics approaches, whole-genome SNP arrays, and now whole-genome sequencing.

From a research perspective, in order to try to understand these predictors, I think this is going to be a useful tool, as the others have been.

I draw a distinction between a technology that is used for research purposes, where I don't know what the predictors are and I'm going to try and find them out, … and a diagnostic, [in which] case, I already know what the predictor is, what the stratifier is, and I'm going to analyze it. I don't see whole-genome sequencing as ever becoming a diagnostic.

At what stage in drug development are you using next-gen sequencing at Pfizer?

It really varies and it's used in a variety of stages, not just one, but always in the research phases.

We've collected samples — tumor samples, blood samples — from a variety of patients who are enrolled in our clinical trials. We're going to start asking questions after we have the clinical data. We have certain patients who have responded and certain who haven't, and we would like to know if there are any correlates with that in the genome sequence. So, as the technology becomes more widespread, we will perform those analyses. That's in the context of a clinical study.

In the context of earlier stages in research, we may, for example, test a panel of tumor cell lines, and ask the question, 'Which ones show the greatest or the least amount of response to this new drug that we're testing?' and then ask the question from a genome sequence perspective, 'What are the correlates of that?'


What are the problems with using whole-genome sequencing in developing companion diagnostics or in the context of a clinical trial?

It's important to remember — and this is a limitation that people aren't thinking about — everybody's very enamored with the platforms, and they are a technological tour de force, there's no question about that, but the statistical questions are crucial. And that is, it doesn't matter if you are looking at polymorphisms, … variants in protein, or variants in expression level of transcripts. These are data points. And if you look at enough data points with few enough samples, you're always going to find correlates.

You can imagine that if I had a drug that I gave to 100 patients, and I've got a gradation of response, some patients are very good responders, some patients are very poor responders, and most are somewhere in the middle. I have more or less a normal distribution.

And I would like to find out, are there any DNA polymorphisms that correlate with that? So I'm going to take the 10 best responders and the 10 worst responders to my drug and I'm going to do a whole-genome sequence.

I'm going to find loads of things that are unique to one set, that are present in one set and not in the other. Loads of them. Tens of thousands. Most of them are going to be there just by random chance. I can take any group of 10 people at random, off the street, and another group of 10 people, and I will always find some discriminators between the two just by random chance, because I'm looking at a small sample size and millions of analytes — in this case, DNA polymorphisms. That's classic data overfitting.

In order to deal with that, you have to either increase the sample size or you have to play some statistical games. But even that doesn't get around the problem when you have just dozens of samples and millions of analytes.
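To make the scale of that problem concrete, here is a minimal simulation (all of the parameters are invented for illustration) of comparing 10 responders against 10 non-responders across a large panel of markers that, by construction, have no true relationship to response:

```python
# Illustrative only: a 10-vs-10 comparison across many markers with NO real signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_group = 10       # 10 best responders vs. 10 worst responders
n_markers = 200_000    # stand-in for the very large number of polymorphisms scored

# Genotypes (0, 1, or 2 copies of the minor allele) drawn from the same
# distribution in both groups, so no marker is truly associated with response.
responders = rng.binomial(2, 0.3, size=(n_per_group, n_markers))
non_responders = rng.binomial(2, 0.3, size=(n_per_group, n_markers))

# Naive per-marker comparison of the two groups.
t_stat, p_val = stats.ttest_ind(responders, non_responders, axis=0)

hits = int(np.sum(p_val < 0.05))
print(f"Markers 'associated' with response at p < 0.05: {hits:,}")
# Roughly 5 percent of the markers -- about 10,000 here -- clear the nominal
# threshold even though none of them has anything to do with the drug.
```

Every one of those nominal hits is a candidate stratifier that would need independent confirmation before it could be believed.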

So what you have at the end of that kind of research is a hypothesis. You say, 'Alright, I've identified a couple dozen DNA polymorphisms that seem to be present only in the responders and absent in the non-responders.' So now what am I going to do? Well, I'm going to test that hypothesis. I'm going to have to run another clinical trial with my drug. I'm going to have to stratify the patient population into … my signature-positive patients and my signature-negative patients, give both of them the drug, and see if my hypothesis is confirmed. That's the only way to do this.
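That confirmation step amounts to a simple stratified comparison. As a rough sketch (the counts below are made up), one might ask whether response really is confined to the signature-positive patients:

```python
# Hypothetical confirmation analysis: all patients received the drug, and we ask
# whether signature-positive patients respond more often than signature-negative ones.
from scipy.stats import fisher_exact

# (responders, non-responders) in each stratum -- made-up numbers.
signature_positive = (28, 22)   # 28 of 50 signature-positive patients responded
signature_negative = (15, 35)   # 15 of 50 signature-negative patients responded

table = [list(signature_positive), list(signature_negative)]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
# Only if response is convincingly enriched in the signature-positive stratum does
# the candidate signature survive as a potential companion diagnostic.
```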

And I have to say that the vast majority of the time that we do that, those hypotheses turn out not to be very useful. They were just random, spurious correlations that didn't get borne out when you actually tested them. That's been the pattern ever since these large-scale genetic and genomic analyses became possible.

In the panel discussion about the use of sequencing to develop companion diagnostics, you made a comment that the development of a companion diagnostic with sequencing would lag behind the drug development process, rendering it unhelpful for pharmaceutical companies. Can you explain what you meant?

You're getting at a comment I made at the conference about lagging behind. I'm talking about the discovery approach, where I don't know what the molecular predictors are. I do not know what polymorphisms, what SNPs are involved in predicting drug responsiveness.

So, I'm doing a phase II clinical trial in patients and I identify those patients who are the best responders and worst responders to my drug, just on the observation of pure clinical benefit.

And then I go ahead and sequence them all and I identify those hypotheses I've alluded to. I've identified some SNPs that are found only in the high-responding population and some SNPs only in the low-responding population. They may or may not be related. They may be purely coincidental, but I'm going to test it. So that means I do another study to actually test the hypothesis. I've already finished a phase II study in order to generate the hypothesis in the first place, and then I have to do another phase II study to test the hypothesis. Meanwhile, the drug marches on. The drug is in phase III.

By the time I've actually confirmed the hypothesis, and say, 'Aha, I've identified the molecular predictors of responsiveness,' often the drug has already failed its phase III study because I've tested it in all comers rather than just the subset of the population who now I know how to identify who would have responded.

And if I did that, and I go to management and say, 'Hey look, this drug that failed this $400 million trial, I now know how to rescue it. I know now that it's just this subset that we need to test the drug in. We need to do another $400 million phase III study.' Meanwhile, the patent life of this drug is ticking away. What do you think my senior manager is going to tell me? 'Don't let the door hit you on the way out.'


Let's think about the opposite situation. Suppose that the phase III study succeeded. Suppose that we had enough benefit across the all-comers population to warrant an approval by the [US Food and Drug Administration]. And then I go back and say, 'Hey look, all of the clinical benefit was in this subset. We don't have to expose the patients who are not going to benefit to any risk, and we don't have to burden the healthcare system with paying for this, because I can now identify the subset that is really responsible for all the clinical benefit here.' So what's going to happen? The FDA is going to love it. Medicare is going to love it. They'll change the label to make sure we have a companion diagnostic to identify these patients, and we do not give our drug to any patient who does not have the appropriate polymorphism.

We're going to cut down the population that's going to get this drug. The average clinical benefit is going to be higher, but you think the reimbursement for an already marketed drug is going to change? I promise you it will not. So all we're doing is cutting down our market, but we're not changing the reimbursement because that's not the way the game works at the commercial level. So as you can see, whether it passes or fails in the phase III setting, I've lagged behind. By the time I've developed my test and confirmed my hypothesis, it's almost too late to do anything about it.

What about holding off on doing the phase III trial?

Every drug has a patent life. That patent is essentially 20 years from the time it's issued. Trying to ascertain when to get that patent is a tricky game. Usually it's obtained right around the time of early-stage preclinical work. You want to be sure you're protecting the drug from competition, otherwise somebody else will patent it. However, once it's issued, the clock ticks.

The longer I delay, the more patent life and the more potential money from this drug is lost. Delay is anathema to the pharmaceutical industry. You can't just wait. You need to move and move quickly. Think about [Pfizer's cholesterol drug] Lipitor. Lipitor is making $11 billion a year. Every day you delay, that's millions of dollars that's lost. That's the way the pharmaceutical industry thinks about these things, as they should.

Some researchers have argued that you can use whole-genome sequencing to increase the statistical significance of a drug trial, by using it to select for patients who will be most likely to respond. But you seem to be saying the opposite?

If I know because I've done the research, I've confirmed in a hypothesis-testing clinical study … that this variant in this particular gene is predictive of drug response and if you don't have this variant, you're not going to respond to my drug … suppose I know that. Now, I'm going to go ahead and do a phase III trial, or it's even going to be in my drug label and I'm going to now test for that variant. How am I going to test for that variant? Am I going to sequence the entire genome in order to get that one read or am I going to do a PCR reaction just on that one gene? Certainly, right now, I'm just going to look at the one gene. I'm not going to look at the whole genome.

It may ultimately become so cheap to do an entire genome sequence that it's actually cheaper to look at the whole genome and just dig through the data by computer and look at that one site even though I've got 3 billion bases that I'm not looking at — I'm just looking at that one polymorphism, but I've sequenced all three billion. I would submit that when we get to that point, that it won't be the pharmaceutical companies or the diagnostic companies that are developing the tests in the context of a particular drug treatment. That will be done as part of routine health care.

Everyone will have their genome sequenced and sitting in some database somewhere, and it's just a matter of looking it up. There is no diagnostic involved. That's where I see it going.
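As a rough illustration of that 'look it up' scenario, here is a minimal sketch that assumes the patient's variant calls are already stored in a standard VCF file; the file name, coordinates, and allele are all hypothetical:

```python
# Hypothetical lookup of one predictive polymorphism in a stored genome (VCF file).
import gzip

def has_variant(vcf_path, chrom, pos, alt):
    """Return True if the VCF records a call for allele `alt` at chrom:pos."""
    opener = gzip.open if vcf_path.endswith(".gz") else open
    with opener(vcf_path, "rt") as vcf:
        for line in vcf:
            if line.startswith("#"):              # skip header lines
                continue
            fields = line.rstrip("\n").split("\t")
            # CHROM, POS, ID, REF, ALT are the first five VCF columns; ALT may
            # list several alleles separated by commas.
            if fields[0] == chrom and int(fields[1]) == pos:
                if alt in fields[4].split(","):
                    return True
    return False

# Made-up coordinates for illustration; a real label would name a specific variant.
if has_variant("patient_genome.vcf.gz", "1", 123456, "T"):
    print("Predictive variant present")
else:
    print("Predictive variant absent")
```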


Have topics you'd like to see covered by Clinical Sequencing News? Contact the editor at mheger [at] genomeweb [.] com.