By John S. MacNeil
The US Food and Drug Administration is under a lot of pressure these days, most recently in the wake of the uproar over findings that Vioxx is less safe than previously assumed. But not all changes taking place within the agency today are reactions to such pressure. In fact, FDA administrators in recent years have made efforts to update their review process to take advantage of new technologies and clinical strategies — presumably not an easy task for a government agency of 9,000 employees and a 2005 budget of $1.8 billion.
One stimulus for change within FDA came from a white paper known as “Challenge and Opportunity on the Critical Path to New Medical Products,” which was published in its final form in March of last year. Among other recommendations, the authors discuss the role of new technologies derived from genomics in the regulatory review process.
If you think this is only important to pharmaceutical companies, think again: how FDA scientists adopt (or ignore) new types of gene expression, SNP pattern, and whole transcriptome analysis will have a significant impact on researchers both upstream and downstream of the regulatory process. Essentially, when FDA decides that a new type of experiment is worth factoring into the review process, the validity of that experiment skyrockets. With the widely anticipated release of the voluntary pharmacogenomics submission guidelines — expected out in mid-March — the agency was set to provide a few more clues to how its scientists will gauge which new types of experiments and corresponding data are valid enough to carry weight in the drug approval process.
One of the scientists behind this effort to embrace new genomics technologies is Felix Frueh, a 37-year-old biochemist originally from Switzerland who is now serving in the newly created position of associate director for genomics at the FDA’s Center for Drug Evaluation and Research. Since last May, Frueh has also served as the head of the Interdisciplinary Pharmacogenomic Review Group, a panel of scientists drawn from several FDA centers that evolved from the original working group responsible for drafting the first version of the agency’s guidance document on pharmacogenomic data submissions.
As Frueh describes it, the role of the IPRG is to create a system that allows FDA and industry scientists to discuss the impact of voluntarily submitted pharmacogenomic data in such a way as to keep that data out of the hands of reviewers actively involved in reviewing the corresponding drug application. Such a “conflict-free” environment, Frueh says, is vital to ensure that industry actually submits pharmacogenomic data at all. For its part, pharma’s motivation for submitting such data — even when it’s not required as part of a drug application — involves the potential benefits of less complicated (and less expensive) clinical trials if FDA officials approve certain types of genomic data in lieu of clinical endpoints.
The IPRG will include scientists from at least four of FDA’s centers: the Center for Drug Evaluation and Research, the National Center for Toxicological Research, the Center for Biologics Evaluation and Research, and the Center for Devices and Radiological Health. The 15 to 18 scientists currently working with the group are only part-time members of the IPRG — they all have responsibilities within their centers that require most of their attention — and thus the level of their involvement varies with the amount and complexity of the pharmacogenomic data that industry submits.
Frueh says the IPRG will include FDA officials with the wherewithal to make judgment calls on how to evaluate the genomics data, as well as researchers with policy-making experience. In addition, the group will potentially have to identify FDA scientists capable of performing more in-depth analyses of the data — while at the same time ensuring that other FDA scientists reviewing an IND or NDA are not exposed to the pharmacogenomics data.
One of the challenges of reviewing pharmacogenomic data submissions, Frueh says, lies in the inherent complexity of large data sets such as whole genome SNP scans, entire transcriptome analyses, or even patterns of gene expression. When considering a simpler type of experiment, such as detecting the presence of a single genotype, scientists often already know quite a bit about what that marker’s presence or absence means biologically. But establishing a connection between biological behavior and patterns of gene mutations, for example, is new territory and potentially more challenging, Frueh says.
At the moment, the voluntary submission guidelines cover only genetic or genomic information; protein expression and metabolite data are still on the sidelines. Although in principle there is nothing preventing industry from submitting other types of data to FDA, Frueh says that the complexity of the technology and the current state of knowledge and sophistication in proteomics and metabolomics haven’t reached a stage at which it would be reasonable to issue regulatory guidances. The principle would still apply, he adds, but proteomics and metabolomics are “too young, too early-stage” to warrant including — or even considering including — these types of data in the regulatory process. “But you’ll hear a lot about [these types of data] in the future,” Frueh says.
On a broader scale, incorporating these types of data submissions into the drug approval process should help streamline the application review, says Bob Temple, director of the office of medical policy at the FDA’s Center for Drug Evaluation and Research. For example, he says, about half of the drugs that enter Phase III clinical trials never result in a New Drug Application. By designing Phase II trials to be more definitive in determining whether a drug has promise — in part by introducing surrogate markers for measuring efficacy — Temple hopes that both FDA and the drug industry can spend less effort on drugs that will never reach the market.
Temple is quick to counter any suggestion that streamlining the drug approval process is akin to lowering standards for safety. By the nature of the three-phase approval process, making the Phase II evaluation of a drug’s potential efficacy more efficient is completely separate from evaluating its safety in human beings in Phase I trials. “[Safety and efficacy] are really two quite separate matters,” he says.
And prodding a large bureaucracy like FDA into making changes is no trivial matter, Temple acknowledges. “We’re much more inclined to have interdisciplinary meetings,” he says, “and even more inclined once we all move into the same building in July or August.”
In spite of the challenges of incorporating new technologies, Temple thinks the issues are manageable: “The technologies are new — the issues they bring up are not so new,” he adds. Using a genetic test as a marker for susceptibility to breast cancer is not fundamentally different from the old evaluation that took into account family history of the disease, he says. “A lot is territory we’re familiar with.”
EAGERLY AWAITING PGx GUIDANCE
As this magazine was going to press, FDA was days away from releasing its guidelines to industry for submitting pharmacogenomics data to the agency on a voluntary basis — or so agency officials said. The problem is, the agency has said many times before that the document would soon be made public. First released as a draft in November 2003, the guidelines have been delayed numerous times to incorporate changes to the text. “We completely underestimated the time it would take to incorporate changes from the three centers, and from industry feedback,” Frueh says. Each change from one center had to be approved by the other two centers, as well as by the legal and policy people, he adds. “Perhaps next time it will be faster!” The guidance will appear on a special FDA website upon release, along with a companion Manual of Policies and Procedures and “frequently asked questions” documents.