After two days of workshop meetings between regulators and the drug development industry in Washington, DC, last week, one thing is clear: The US Food and Drug Administration will rewrite some of the suggestions it advanced in the groundbreaking “Guidance for Industry: Pharmacogenomics Data Submissions” document it released for comment two weeks ago.
For the microarray industry, this means that the regulatory pathway for data derived from the technology remains unclear and uncharted. And without regulatory agency approval, a potentially lucrative clinical market for molecular diagnostics remains essentially untapped.
After an intensive workshop at the Wardman Park Marriott, where agency officials and scientists interacted with counterparts from academia and commercial biopharma, Larry Lesko, director of the FDA’s office of clinical pharmacology and biopharmaceutics and one of the agency’s leaders in the effort to bring genomic data into the regulatory process, said the agency would rewrite parts of the draft guidance to reflect comments provided by dozens of the approximately 500 attendees from around the world.
“We recognize from these two days that there are many areas of this guidance that can be improved for greater clarity,” Lesko said in wrap-up remarks Friday. “I think we can do it, and we are going to do it.”
The draft guidance, which the FDA’s Center for Drug Evaluation and Research originally said would be ready for public comment in the summer, is designed to generally instruct industry about when and how pharmacogenomics data should be submitted with certain new drug applications, investigational new drug applications, and biological license applications. It is available for viewing online here.
The Unmentioned Guest
While the two-day meeting, organized by the non-profit Drug Information Association in collaboration with the FDA, was called “Pharmacogenomics in Drug Development and Regulatory Decision-making,” it also was about microarrays, which are producing the tidal wave of information that many say is at the root of this activity.
The microarray industry was well represented at the workshop, at least as observed by a scan of the attendees list: Affymetrix and Agilent Technologies, the top two manufacturers of preprinted arrays, sent representatives; as did Applied Biosystems, the sequencing technology giant which in July announced its entry into the arena with a new microarray system for conducting gene-expression analysis. Also present were representatives from Nanogen, the San Diego maker of an electronic microarray platform, and Vialogy, a Pasadena, Calif.-based startup that is commercializing a microarray analysis platform.
The effort that brought about this summit meeting of the globe’s pharmacogenomics leaders began 18 months ago at a much smaller workshop. Since then, the industry, the science, and the technology developing under the umbrella term “microarray” — and the published data it is producing — have exploded, offering the promise of more knowledge about human health and the tantalizing benefits that knowledge may bring to healthcare.
“I came to this meeting with hope, and I leave it with even more hope,” said Donna Mendrick, vice president for toxicogenomics for Gene Logic. “A year and a half ago, the discussion was: ‘This technology is unreliable, we don’t know how to use it.’ Now, we are at the point where you can use it.”
So, while this meeting did not address the much-discussed shortcomings in microarray technology — issues of standards, reproducibility, reliability, cost, and a lack of cross-platform concordance, to mention a few — what was at issue was how to create a way for the scientifically valid data to enter the clinical regulatory arena.
“The microarray has more prediction power than genotyping,” said Gualberto Ruaño, president of Genomas of New Haven, Conn. “Because of that predictive power, it’s the major reason for this meeting. We would not have this meeting if the data coming from microarrays had not been so important. It’s a new biomarker, but people are not comfortable with it yet — they want things systematized and formalized. There is a ways to go before it achieves that.”
The opening session of the workshop ended with a PowerPoint slide reading — “Let the chips fall where they may.” And, did they ever.
After opening speeches by Mark McClellan, the FDA commissioner, and Janet Woodcock, the head of FDA’s CDER, workshop attendees divided into three groups — pre-clinical, pharma clinical, and clinical — to dissect the guidance document according to the needs of their groups.
The squeakiest wheel was the pre-clinical group, where in two separate sessions, each over 60 minutes, the discussion and the comments centered around voluntary submissions of data — how much data should a sponsor of a new drug provide to the FDA, when should it be provided, at what point is it required, and what is not required? In a show of hands, only one attendee indicated an interest in providing the FDA with a complete data submission, instead of a summary.
“It was clear to me that the level of concern differed between the non-clinical and pharma-clinical groups,” Lesko said. “I think that is related to the proximity of this data relative to drug development — in early drug development, pharmacogenomics data may be prominent in the overall data set that is available.”
The questioners were concerned whether the agency, seeing a trend amongst the data sets submitted, might use it in a regulatory fashion. Or, they wanted to know more about the interdisciplinary working group that will be set up to help the agency deal with submissions.
“There needs to be more clarity in what the FDA is going to do with the data in voluntary submissions,” said Lesko. And, what is the interdisciplinary pharmacogenomics working group going to do? “We have not written a job description for this group, we might do that fairly soon,” he said.
The agency plans to start recruiting staff in December, Lesko said, “appropriate for the number of submissions that come in.” Judging from industry comments on its willingness to submit data under this guidance, it may be a small team to begin with.
The group would be the key player in the data-submissions area, shepherding the data submitted as part of an Investigational New Drug application or New Drug Application, and helping guide regulatory decisions.
“Right now, pharmacogenomics has amounted to an explosion of information,” said McClellan. “There are a lot of results, those chips falling where they may. But as of yet, there is not much knowledge on exactly what this all means, or whether a treatment is going to be safe or going to be effective in particular kinds of patients.”
The key to turning that data into knowledge might be held within two phrases presented in the draft guidance: the known valid biomarker and the probable valid biomarker. Known valid biomarkers must be submitted with an IND, an NDA, a BLA, or a supplement. Probable valid biomarkers need not be submitted if they are not used in the sponsor’s decision-making process, but may be submitted voluntarily. Exploratory pharmacogenomic data can also be submitted voluntarily.
Valid biomarkers, the guidance reads, can be measured in an analytical test system with well-established performance characteristics, and there is an established scientific framework that elucidates the significance of test results; CYP450 2D6 and thiopurine methyltransferase are given as examples well understood in the scientific community. Probable valid biomarkers appear to have predictive value for clinical outcomes, but may not yet be widely accepted or independently replicated, as when data is sufficient to establish a significant association between a pharmacogenomics test and clinical outcomes but is unpublished and perhaps known only to the drug sponsor.
“I think you should realize that what we are trying to do here within this guidance is to establish a kind of general framework wherein new science or new biomarkers can move into the regulatory stage,” Woodcock said. “Right now the subject is pharmacogenomics. The scope is pharmacogenetics or pharmacogenomics tests and results. So anything that has interaction between the genome [or] gene expression is within the scope of pharmacogenomics.”
For Woodcock, much of the pharmacogenomics data currently available is not “well enough established scientifically to be suitable for regulatory decision making.”
The two terms create a “threshold” for scientific data that is ready for regulatory decision making.
“Valid biomarker” was a term “made up for this guidance,” she said, while “probable valid biomarker” recognizes “the dynamic nature of the science and the evolving nature of this field.”
It’s About the Data
The questions, the doubt, and perhaps even the fear that sharing data seems to incite in industry will one day go away, Lesko told BioArray News, giving a hypothetical example of what might spur the change.
“Everybody is talking about things that they haven’t experienced yet,” he said. “Let’s say a company has a microarray that is going to predict which patient will respond to a breast cancer drug. They are going to advance that in drug development and submit it. We are going to get over this hurdle as soon as we have some examples. Everybody is looking at these things, but we haven’t seen them yet only because of the time frame. Five years from now, we will be talking about things a lot differently. It’s going to come down to push-button technology. I’ve seen prototypes — they’re going to have boxes in doctors’ offices to look at a gene array to pick a patient for a drug. We [FDA] have to regulate this, or the public will lose confidence in this stuff.”
The agency plans to publish the final pharmacogenomics data-submission document within 90 days after the Nov. 3 publication date of the draft guidance.