
The Future of Pharmacogenomics from the Small-fry Perspective

While big pharmas lumber along the pathway to personalized medicine, a few nimble, pharmacogenomics-focused firms are defining the field

 

By Adrienne Burke

 

What do you get when you put a group of senior managers from three of the world’s leading pharmacogenomics companies in a room together for an hour? First, several definitions for the word “pharmacogenomics” (see sidebar below). Then, a fascinating conversation about the cutting-edge business of personalized medicine.

As a counterpoint to our cover story this month on how pharmaceutical companies are struggling to integrate the tools of pharmacogenomics into the drug discovery process, we went to the companies that have pharmacogenomics at their core. We asked them to talk about the technological and strategic barriers to their own success, and for their points of view on big pharma’s adoption of the new paradigm they call pharmacogenomics.

Our panelists, in order of appearance, are Colin Dykes, CSO of Variagenics in Cambridge, Mass.; Gualberto Ruaño, CEO of Genaissance Pharmaceuticals of New Haven, Conn.; and Trevor Nicholls, CEO of Oxagen in Oxford, UK.

GT asked Michael Liebman, director and computational biology investigator for the Abramson Family Cancer Research Institute and a professor of cancer biology at the University of Pennsylvania Cancer Center, to moderate the discussion.

 

Michael Liebman: Will you each give some indication of what business model and focus your company has?

 

Colin Dykes: There are two wings to our strategy. Both will have the same end product, i.e. commercially available tests that will allow physicians to select the safest, most effective therapy for their patients. The first is partnering with biotech and pharma to identify markers for response to compounds in clinical development. If a drug hasn’t reached its efficacy targets in Phase II, but clearly works well in some people, we perform a retrospective analysis of the data, screening for markers associated with response or lack of response. Then we can formulate a hypothesis to test prospectively in a Phase III trial. Like Genaissance, we believe that haplotypes are the fundamental units of variation, and that’s what we use in these studies.

We also have our own independent medical program focused on diseases we think of as the most tractable for pharmacogenomics and where there is the clearest medical need for tests predicting drug response. So one of our main efforts is development of tests to predict patients’ response to cancer chemotherapeutics, an area where there is great need, and potential, for improved treatment regimens. This program is independent of the partnering strategy.
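
To make the retrospective screen Dykes describes concrete: for each candidate haplotype marker from a completed Phase II trial, one can test whether carriers responded more often than non-carriers. The following is a minimal sketch in Python, assuming SciPy is available; the patient records and marker are invented for illustration, not Variagenics data.

    # Hypothetical per-patient records from a completed Phase II trial:
    # (carries_candidate_haplotype, responded_to_drug)
    from scipy.stats import fisher_exact

    patients = [
        (True, True), (True, True), (True, False),
        (False, False), (False, True), (False, False),
        # ... in practice, hundreds of trial subjects
    ]

    def marker_response_test(records):
        """Odds ratio and p-value for a 2x2 carrier-by-response table."""
        table = [[0, 0], [0, 0]]  # rows: carrier/non-carrier; cols: responder/non-responder
        for carrier, responder in records:
            table[0 if carrier else 1][0 if responder else 1] += 1
        return fisher_exact(table)

    odds_ratio, p_value = marker_response_test(patients)
    print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.3f}")

Any marker flagged this way is only a hypothesis, which, as Dykes notes, would then be tested prospectively in a Phase III trial.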

 

Gualberto Ruaño: We have created a two-pronged business model to commercialize our technology. One prong concerns technology and drug-specific partnerships; the other, begun over the last two months, is our own drug development program — the HAP Drug Program. HAP Drugs [will be] developed, prescribed, and marketed for specific populations of patients defined by genetic profiles — in our case, haplotype profiles. The HAP Drug Program will be enhanced by in-licensing other compounds as well.

 

Trevor Nicholls: We take more of a discovery focus, aiming to discover novel genes and disease pathways in common inflammatory, metabolic, and endocrine diseases. We’ve built up very large collections of family samples, all with detailed phenotypes, and then analyzed these using first a linkage analysis approach and then association analysis within the linkage regions to try to identify novel drug targets and diagnostic markers.

By identifying disease genes you are identifying disease pathways. We are aiming to identify subtypes of the disease based on mechanism, so that a therapy can be targeted much more effectively to the mechanism that is defective in the individual affected by that disease. It’s a holistic, therapeutic-area-driven approach rather than a functionally driven approach.

ML: Are you finding that this is a change in pharma? Pharma has had such a long history of being organized into functional silos.

 

TN: It depends on the company. Our original partnership was with Astra, which had a therapeutic-area approach to the way they were organized. At other companies we have seen much more of the functionally driven approach, but then companies like GlaxoSmithKline are transitioning to this therapeutic-area approach and a much more multidisciplinary approach to the whole discovery and development process.

Because we have built up large, well-phenotyped clinical collections, we’re getting pharmaceutical companies coming to us to do target validation in those collections. So that’s more of a candidate-gene approach — mining for SNPs in those potential targets and looking for disease association.

ML: When I was in pharma, I tried to ask our customers, “What is a valid target?” And I found there was a very wide range of definitions for a valid target. How do you find the issue of target validation as an opportunity?

 

GR: We’ve always said that the simplest, most fundamental, and certainly least controversial component of pharmacogenomics is the target itself. There are two fundamental utilities of our technology. The most fundamental is variability in the domains of the target that will interact with the chemical entity, in the case of small organic molecules. In the case of recombinant proteins, the question is which form of the target you clone and produce for manufacturing purposes, because you want to make hormones that match the distribution found in the population; you don’t want to clone a form of the hormone that is very rare. On the other hand, you don’t want to screen your compounds against a form of the receptor that is very rare simply because you didn’t know its frequency.

So there is, if you will, a fundamental frequency demographic that you can start with — this is a fundamental utility of the haplotypes. Beyond that, you start getting to more value. You start to say, “Fine, what is the distribution of this target in the diseased population? Is this target also distributed similarly in other diseases?” And there you say, “Is the frequency of this haplotype different in this population of patients versus the other population of patients?” If it is, then that is a hint that it may be involved in a complex process as opposed to just a downstream process.
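
To make Ruaño’s “frequency demographics” concrete, here is a minimal sketch, with invented haplotype labels and counts, of tallying haplotype frequencies in a reference sample and in a patient cohort. The common forms are the ones worth cloning or screening against, and a shifted distribution in the patient cohort is the kind of hint at involvement he describes.

    from collections import Counter

    def hap_frequencies(haplotypes):
        """Return haplotype -> relative frequency for a list of observed haplotypes."""
        counts = Counter(haplotypes)
        total = sum(counts.values())
        return {hap: n / total for hap, n in counts.items()}

    general = ["H1"] * 70 + ["H2"] * 25 + ["H3"] * 5     # reference sample
    patients = ["H1"] * 40 + ["H2"] * 50 + ["H3"] * 10   # one disease cohort
    print(hap_frequencies(general))   # clone and screen against the common forms, not rare H3
    print(hap_frequencies(patients))  # a shifted distribution hints at involvement in the disease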

 

TN: People jokingly say that the only validated target is one where there’s a billion-dollar drug already on the market. To be slightly flippant, our own experience in taking targets to big pharma is that they always want one stage more validation than you’ve actually done.

But it is encouraging to me to see companies now really beginning to take seriously something we’ve all been saying on conference platforms, which is that you should bring in genetic profiling of targets much earlier on in the discovery process. So start getting these profiles of genotypic or haplotypic variation before you go into compound screening. And use that information to plan your strategy going forward.

If you’re trying to implement pharmacogenetics at Phase III it is far too late and far too expensive.

 

GR: It is a fundamental point that Trevor is making. The idea of genomic control, where you want to balance the genotypic component of each of the arms, is a way of doing not only prospective analysis but also retrospective analysis in the case of trials that may have failed. In clinical trial analyses, you can now analyze populations based on genomic profiles, as opposed to comparing all patients on the active arm against all patients on placebo. And this is an opportunity to leverage the investment in clinical development toward discovering markers or targets that actually matter to drug response, beyond the initial screening target that was used.

The genotypic and haplotypic information is built in; it is organic to the design of the drug. We believe that the genotyping and haplotyping is a marketing advantage, and we therefore want to carry this all the way to the doctor’s office.
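
Here is a minimal sketch of the genotype-stratified trial readout Ruaño is describing, assuming nothing beyond the Python standard library; the arm names, haplotype profiles, and outcomes are invented. Instead of comparing all active-arm patients against all placebo patients, response rates are broken out by profile within each arm.

    from collections import defaultdict

    # Hypothetical trial records: (arm, haplotype_profile, responded)
    trial = [
        ("active", "HAP1", True), ("active", "HAP1", True), ("active", "HAP2", False),
        ("placebo", "HAP1", False), ("placebo", "HAP2", False), ("active", "HAP2", True),
    ]

    counts = defaultdict(lambda: [0, 0])  # (arm, profile) -> [responders, total]
    for arm, profile, responded in trial:
        counts[(arm, profile)][1] += 1
        if responded:
            counts[(arm, profile)][0] += 1

    for (arm, profile), (resp, total) in sorted(counts.items()):
        print(f"{arm:8s} {profile}: {resp}/{total} responded ({resp / total:.0%})")

The same breakdown applied retrospectively to a failed trial is what turns sunk clinical development cost into marker discovery.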

 

ML: That makes perfect sense until you talk to the marketing management and they tell you they don’t like segmenting the market!

 

TN: I think pharma marketers are seeing it as less of a threat because they’re beginning to realize [that] empirical pharmacogenetics has been going on in the doctor’s office for many, many years. You know: “Take these tablets for two weeks and come back and see me and let’s see how you’re getting on.” And if the patient hasn’t responded, or is getting side effects, then there’s a change of the drug prescription.

 

AB: Can you talk about current limitations of the technologies of SNP discovery and genotyping and how you see them advancing — or not?

 

GR: Where we have now spent most of our technological effort is on creating technology and algorithms to rapidly unravel the connections among a given series of haplotypes for multiple genes. For instance, we’re looking at 200-plus genes involved in cardiovascular homeostasis and abstracting the polymorphism variation in these 200 genes to the haplotype level. We are concentrating our efforts now on the technology that allows us to create genetic profiles based on multiple markers that we can then commercialize for our HAP Drugs. And that is a challenge. I think that’s where [we] will push genetics forward all the way from the haplotype to the phenotype in a way that can be utilized to demonstrate superior efficacy and safety of a given pharmaceutical drug.
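
As a toy illustration of the multi-gene profiles Ruaño mentions, the sketch below collapses a patient’s per-gene haplotype calls into a single profile key that efficacy and safety analyses could stratify on. The gene and haplotype names are invented, and real profiles would span the 200-plus genes he cites.

    def hap_profile(calls):
        """calls: dict mapping gene name -> pair of haplotype labels for that patient."""
        return "|".join(f"{gene}:{a}/{b}" for gene, (a, b) in sorted(calls.items()))

    patient = {"AGT": ("H1", "H3"), "ACE": ("H2", "H2"), "NOS3": ("H1", "H4")}
    print(hap_profile(patient))  # ACE:H2/H2|AGT:H1/H3|NOS3:H1/H4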

 

TN: The challenge that hits us technologically is still going from linkage to association. The SNP genotyping technologies that are available today are still, frankly, not up to the task, in terms of throughput and cost per assay. There’s still a factor of at least 10 to go before they’re there.

But the other issue that comes up is that if you’re going to do high-density SNP association, current assay formats use up a lot of that precious DNA. So I’m very attracted by technologies like the Illumina technology, which offers the opportunity to do a large number of SNP assays in one DNA sample.

 

ML: Is reproducibility of the experiment at a price point where you can analyze enough samples to have good confidence for later analysis?

 

TN: We’re in the process of genotyping 2,000 SNPs in a linkage region across 2,000 samples in one disease. So that’s fairly powerful statistically. But the key reliability question is: can you pull a SNP from available databases and turn it into an assay that will work in the wet-biology sense? And there the failure rate is still quite high. It’s around 50 percent if you use the public-sector databases.

 

CD: I think it could be worse than that because the public databases contain many errors. Developing and validating the SNP assay is the major cost and much higher than the cost of actually running the assay. The development cost will then be amortized over the number of times the assay is used, but I suspect that it is often ignored in the prices quoted for SNP assays. Nevertheless, the cost is dropping to the point where relatively large studies are becoming feasible.
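
Dykes’ amortization point reduces to simple arithmetic; the sketch below uses placeholder dollar figures, not quoted prices from any vendor, to show how the true per-genotype cost falls toward the marginal running cost only when a validated assay is reused across a large study.

    def cost_per_genotype(development_cost, run_cost, genotypes_run):
        """Per-genotype cost once assay development is amortized over its uses."""
        return development_cost / genotypes_run + run_cost

    # A validated assay run a few hundred times is dominated by development cost.
    for n in (200, 2_000, 20_000):
        print(f"{n:6d} genotypes: ${cost_per_genotype(500.0, 0.50, n):.2f} each")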

 

TN: We’ve just bought into the Celera database because the fidelity of the SNPs that are in there is much higher than any in the public sector. And it is one of the first purchases of that type for which I’ve been able to do a very straightforward cost-benefit analysis.

 

ML: I assume that for all of you one of the major infrastructure costs is associated with the informatics. What limitations are you confronting in terms of informatics?

 

CD: I’m not sure that I’m aware of any informatics problem. Basically the more good programmers you can get into your building, the more useful programs you’re going to get out. It’s pretty much a numbers game.

 

TN: We’re putting in our own software programmers to develop an informatics infrastructure to handle this complex mix of experimental data, genetic analysis, and phenotype data, and that’s a reflection of the fact that there are no good off-the-shelf products available out there.

 

GR: Oh, absolutely. There is nothing off the shelf.

 

ML: Bioinformatics has conventionally taken a genomics perspective, and you’re all addressing data most closely related to the clinical side. So what would you suggest are potentially new requirements for people who are now being trained?

 

TN: It’s the blend of the type of software written to cope with clinical trial data, the type of software written as LIMS programs for pharma companies, and then specialist software written for genetic analysis. At the moment you can buy packages that address one of those needs specifically, but it’s the integration of these three…

 

GR: And eventually the final level of informatics would be at the doctor’s office where a decision is made to prescribe this product at this dosage. That is the ultimate challenge from the informatics side.

 

TN: When pharma companies come and look at the genetics-focused informatics that we’ve built up, they get interested in partnering with that element of it because, with the possible exception of GSK, very few of the pharma companies have built genetics groups that are at the critical mass that each of our companies is at. And we’re focused day in and day out on genetics, whereas their groups are much smaller and can’t really command the informatics support they need to build up a bespoke system. So they’re actually coming in and saying, “Hey, this is really interesting. Can we have access to this platform as part of the partnership we’re talking to you about?”

 

ML: Do you find the same kind of response, Colin?

 

CD: Basically yes, although the response varies from company to company. The medium-sized pharmas and large biotechs tend to be more responsive than the larger pharmas, because they usually have smaller drug pipelines and therefore have more invested in each drug candidate. You can be seduced by trying to go for big collaborations with big companies but your time is probably better spent going to the smaller organizations who lack the internal resources to perform these sorts of studies themselves.

ML: At one point, modeling of small molecules was considered a way-out-there technology and now it is, as it should be, just one of the tools. We’d all like to think pharmacogenomics is going to go that way too. What kind of time frame do you think it’s going to be for that?

 

TN: In terms of people doing genetic validation of their targets — SNP mining extensively, looking for disease association of some of those SNPs, and using that information to plan some of their clinical trials — that’s only a very short time away. We’re seeing more and more pharma companies incorporating that as part of their discovery process, really before they put a target into screening. It will take time before things work their way through the pipeline, but there’s a major shift in culture that we’ve got to work through with the medical profession to get them to accept this type of approach.

 

CD: In terms of finding markers for cancer drugs, there are already enough data to develop tests that will have high predictive value. And there are companies already doing this. Timelines will be affected by the regulatory framework that’s applied to genetic tests before a physician in his office can use them. It could be two or three years before you get molecular genetic diagnostic tests for drug response approved by the FDA. But “home brew” type tests not requiring FDA approval could be available much sooner than that, in months rather than years.

 

TN: I think you will see the FDA getting much tougher on genotyping related to safety issues. The other constituency that is going to drive this is the healthcare providers because if they get a sense that there is a test out there that can tell if a patient will respond or not to a $2,000 or $3,000 course of therapy, you can bet that the healthcare providers are going to mandate that the test is used before the drug is prescribed.

 

Pharmacogenomics and Pharmacogenetics Defined

 

Colin Dykes, Variagenics: Pharmacogenomics is the use of any information derived from the genome, whether it is genetic, with inheritance involved, or purely the study of genomes, to define markers for drug response.

 

Gualberto Ruaño, Genaissance: Pharmacogenomics is personalized medicine. The personalization of the treatment could be in terms of dosage, the actual medication, or the indication you were after, but at the end of the day it boils down to using the DNA of each person to guide therapy and develop personalized therapeutic products.

 

Trevor Nicholls, Oxagen: Pharmacogenetics [is] the understanding of the genetic profile of individual patients to enable them to receive the most effective and safe treatment for the particular condition they’re suffering from.

 

Michael Liebman, University of Pennsylvania Cancer Center: We expand the definition somewhat because we’re looking at genetics as the base upon which interaction with environment and lifestyle actually produces the phenotype, and the phenotype is something that develops over time. So the phenotype, coupled with the genetics, becomes the complex that you’re treating when you see someone with a disease.

 
