By Tony Fong
A consortium of researchers that includes representatives from the National Cancer Institute and the US Food and Drug Administration has published two mock submission guides designed to help researchers in the translational proteomics community submit protein-based in vitro diagnostic multivariate index assays, or IVDMIAs, for regulatory approval.
The two mock submissions are described in a study in the Dec. 10, 2009, online edition of Clinical Chemistry. They cover two platforms: a multiplex immunoaffinity mass-spec platform for protein quantification, which the authors dubbed PepCa10; and an immunological array platform for glycoprotein isoform quantification, which they called SDIA.
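Whatever the platform, the core idea of an IVDMIA is the same: several measured analytes are combined by an algorithm into a single index score that drives the clinical call. A minimal sketch of that combination step, using an invented logistic weighting — the marker names, weights, and cutoff here are fabricated for illustration, and the actual algorithm of a cleared test such as OVA1 is proprietary:

```python
import math

# Hypothetical illustration of how an IVDMIA folds several protein
# measurements into one diagnostic index score. All marker names,
# weights, and the decision threshold are invented for this sketch.

def index_score(markers, weights, intercept):
    """Logistic combination of marker concentrations into a 0-1 score."""
    z = intercept + sum(weights[name] * value for name, value in markers.items())
    return 1.0 / (1.0 + math.exp(-z))

# Fictitious three-marker panel (arbitrary units) and weights.
weights = {"markerA": 0.8, "markerB": -0.5, "markerC": 1.2}
patient = {"markerA": 2.1, "markerB": 0.4, "markerC": 1.0}

score = index_score(patient, weights, intercept=-2.0)
flag = score >= 0.5  # hypothetical decision threshold
```

The point the article returns to later is that every marker feeding such a score — and every capture reagent behind it — must be separately validated, which is why panel size drives regulatory effort.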
The authors said they created the resource for the translational proteomics community to address a general lack of knowledge about the regulatory approval environment for protein in vitro diagnostics.
While the IVD industry may be familiar with the FDA approval process, the translational proteomics community may be ignorant of the "intricate regulatory issues involved in moving a newly developed test to market," the authors said. "This complex process can appear particularly daunting when new technology," such as mass spec-based proteomics, is used.
The submission exercise was aimed at clarifying the FDA's process on submissions for diagnostics based on multiple protein markers, Fred Regnier, who authored the mock submissions for the SDIA platform, told ProteoMonitor this week.
Not many people have made such submissions so far, and both the NCI and FDA were "disappointed more people hadn't," said Regnier, a professor of analytical chemistry at Purdue University. "I, like most academic researchers and the FDA and the federal granting agencies, [am] concerned that it takes so long to move proteomics technologies to the clinic."
Also, because development of protein-based multianalyte tests is still in its infancy, "there haven't been many mass spec-based assays actually brought to the FDA previously, so they didn't really have a need or an opportunity to look at this technology," said Leigh Anderson, founder and president of the Plasma Proteome Institute and the author of the mock submission for PepCa10.
Indeed, Vermillion's OVA1 ovarian cancer tumor-triage test last September became the first protein-based IVDMIA to receive 510(k) clearance from the FDA.
According to Dan Chan, director of the Center for Biomarker Discovery at Johns Hopkins University and one of the original developers of the five protein biomarkers that make up OVA1, industry observers repeatedly told him that the test was destined for FDA rejection, partly because of the FDA's inexperience evaluating such tests [See PM 12/11/09].
In a meeting held by the American Association for Clinical Chemistry in the fall, Zivana Tezak, a scientific reviewer at the FDA, said that one stumbling block to the protein IVDMIA pipeline has been that researchers and the FDA have often worked at cross purposes. While researchers may be interested in the molecular mechanisms of a biomarker, the FDA wants to see that the marker does what a submission claims it does, and that it does so safely.
"It's not enough to understand the biology," Tezak told the audience.
And because the FDA approval process is cloaked in confidentiality, trying to get at how a particular diagnostic is developed and how the agency evaluates a test is close to impossible.
Against this backdrop, the NCI-FDA Interagency Oncology Task Force subcommittee on molecular diagnostics held a workshop in October 2008 to identify the "analytical validation requirements" that proteomics-based researchers, in particular those using mass specs and affinity array assays, may need to satisfy for regulatory approval.
Case studies were presented at the one-day workshop, which evaluated six devices, including two cleared by the FDA: Agendia's MammaPrint gene expression-based microarray test, and an MS/MS-based test system for screening metabolic disorders in newborns.
Four devices under early development were also presented and discussed "to provide a framework for understanding the complexity and the extensive variety of platforms that are being envisioned for clinical use," researchers said in a separate study, also published online Dec. 10, 2009, in Clinical Chemistry.
These included a protein-based biomarker assay to identify individuals with an increased risk for colon cancer; an MRM-MS assay of five candidate biomarkers for the early detection of preeclampsia; an assay of blood-based enzymatic activity as a biomarker for cancer; and a platform comprising immunological assays that use interferometry or laser-induced fluorescence to help enable the translation of biomarker validation studies and disease-specific diagnostic assays on individual plasma samples.
Among the lessons that grew out of the workshop was that an intended use needs to be clearly identified before clinical studies are initiated "so that appropriate data are generated to support that use." Also, a study design and a plan for statistical analysis of the data are essential, and, although the FDA may not be concerned about the methods used to generate candidate biomarkers, "a mechanistic understanding of the role of the analytes in health and disease may ease the validation and review process."
The participants from the workshop also recommended that the NCI's Clinical Proteomics Technology Assessment for Cancer consortium, comprising five teams of researchers, create mock 510(k) documents for submission to gain guidance on the regulatory approval process and to familiarize the FDA with the technology used to develop protein IVDMIAs.
Bringing All Parties to the Table
According to Anderson, the mock submissions helped to build a "conceptual bridge" between those who develop biomarkers and the FDA. They also put both groups in touch with a third set of stakeholders: diagnostic companies "who take biomarkers and then actually translate them into commercially viable products." In that way, the mock submission exercise began to create a dialogue among all the players, he said.
In their Clinical Chemistry article, Anderson and his colleagues emphasized that the mock submissions, which contained fictitious data, only emulate actual submissions to the agency for premarket clearance or approval, and, as such, are less detailed than a real submission. Still, participants in the exercise said it was an eye-opening experience.
Purdue's Regnier said that before the exercise, he had no idea how labor-intensive the FDA approval process is. "I was really amazed [at] the amount of effort to do this," he said, adding that had this been a real 510(k) submission, it probably would have been even more work and taken twice as long as the six months he spent doing the exercise.
In particular, Regnier said, he "underappreciated" the challenge of validating multiple markers on one test. "As the number of markers goes up, there is more than a linear increase in difficulty in validating all those markers. It's a tremendous job," he said.
One area for improvement identified by the study was the need for better antibodies for protein assay systems, especially because capturing and enriching analytes will likely remain a major feature of such systems. As the authors wrote in the study, "no technology on the horizon appears to be capable of displacing antibodies and other high-specificity binding proteins as the capture agents of choice."
While antibodies can capture specific proteins of interest, they also bind to other proteins that are not of interest in the capture process. Regulatory agencies, the authors said, may in the future require information about which proteins and peptides are captured during an assay and how non-analytes affect the measurement of specific analytes.
That may mean having to validate all those reagents, "especially if you're doing antibody selection or a complete antibody system," Regnier said. To do so, a researcher or diagnostic developer would have to validate what each antibody is capturing, going through each lot to assess the quality of the antibodies.
"To me, it was a tremendously difficult process as you start to look at 10 to 50 markers," Regnier said.
Before he did the mock submission, he saw no reason not to develop a test using 20 or 30 markers, but as he went through the process, he "began to understand very quickly why you wouldn't want to do that." In addition to the amount of work required, validating and then manufacturing such a test would also be extremely expensive.
"As you have to go through and QC the test as you're manufacturing it, it makes it much more expensive," he added. One solution would be to frontload the discovery-to-clinic pipeline so that the heavy lifting is done in the discovery and verification stages, leaving less work for the validation step, he said.
That would mean trying to find a small panel of initial markers, perhaps 30, that show promise as candidates for a specific disease, then, by running tests on a small set of patient samples, winnowing the panel down to "the smallest set you can" that best indicates the disease being investigated.
Only then should validation be done, Regnier said. "Knowing that validating 30 markers across large numbers of patients and then having to go into a manufacturing process involving all those would be much, much more expensive than doing six, or eight, or 10 markers, we would try to see if in fact smaller numbers of markers can give us the same answer," he said.
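The winnowing step Regnier describes can be pictured as ranking a candidate panel on a small labeled sample set and carrying only the strongest markers into validation. A toy sketch of that idea — the data are fabricated, and the crude effect-size statistic below stands in for the proper statistical analysis and cross-validation a real study would use:

```python
import statistics

# Schematic of panel winnowing: score each candidate marker by how well
# it separates cases from controls in a small pilot sample set, then
# shortlist the top few for the expensive validation stage. Marker
# names and measurements are invented for illustration.

def separation(cases, controls):
    """Crude effect-size score: |mean difference| / pooled stdev."""
    pooled = statistics.pstdev(cases + controls) or 1.0
    return abs(statistics.mean(cases) - statistics.mean(controls)) / pooled

# Fictitious measurements for 4 candidate markers in 5 cases / 5 controls.
panel = {
    "m1": ([5.1, 4.8, 5.5, 5.0, 4.9], [3.0, 3.2, 2.9, 3.1, 3.0]),
    "m2": ([2.0, 2.1, 1.9, 2.2, 2.0], [1.9, 2.1, 2.0, 2.0, 2.1]),
    "m3": ([7.5, 7.0, 7.8, 7.2, 7.6], [4.0, 4.3, 3.9, 4.1, 4.2]),
    "m4": ([1.1, 1.0, 1.3, 1.2, 1.1], [0.9, 1.0, 1.1, 1.0, 1.2]),
}

ranked = sorted(panel, key=lambda m: separation(*panel[m]), reverse=True)
shortlist = ranked[:2]  # carry only the strongest markers into validation
```

The economic logic is the one Regnier spells out: each marker kept in the panel multiplies the downstream validation and manufacturing QC burden, so the cut happens before, not after, the large patient studies.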
The study also pointed to assay analytical specificity as an area of concern. Alternative splicing, post-translational modification, and other forms of protein modifications can result in protein species that have similar structures but different biological outcomes.
While developing a test that can measure such variations will increase analytical specificity, and thus its utility as a diagnostic, doing so will be a daunting challenge if the FDA recommends or insists that data on which protein isoforms have true diagnostic value be submitted.
Pointing to prostate-specific antigen and CA125, Regnier said that there are many glycoforms of those proteins that would probably serve as better markers of disease than the whole protein, but "how one differentiates between these 10 to 20 to 50 isoforms of a particular protein is going to be a really difficult thing, especially if you've got to do that for a whole panel of 20 proteins."
The current technology is not up to the task of doing the job in a quick and cost-effective way, he said.
Good, Not Amusing
Anderson said the main lesson he took away is "that all of these seemingly laborious and not particularly amusing things that need to be done for FDA approval turn out to be really good things to have done if you want to prove that your assay really works well."
That includes "developing the discipline" to show that the assays are reproducible across different laboratories. It also means paying more attention to the statistical analysis of the data coming off the mass spec.
"Getting a really good understanding of all of the elements that can cause potential interferences or other problems in the assay is something that's necessary from an FDA point of view, [but] often in the biomarker world, we've often felt that it's not absolutely necessary to go into all the hairy details of that," Anderson said.
One issue identified in the mock submission paper is that although some FDA-cleared tests use mass specs for drug and metabolite monitoring, no mass spec platform currently used for proteomics research has been approved by the agency for clinical laboratory use with proteomics assays. Specifically, triple-quadrupoles and their software — the basis for PepCa10 — are not manufactured under good manufacturing practices and so do not comply with the quality system regulation "required by the FDA for diagnostic assays and instruments to assure consistent performance in clinical use," according to the study.
Anderson said two reasons account for this. One is the sheer cost of manufacturing instruments that comply with GMP guidelines. The second is that once an instrument is registered as complying with those guidelines, it becomes very difficult to make changes to the platform.
Because "this is a period during which major improvements are being made frequently in mass spectrometer design, there's a natural desire to commit to making a frozen FDA-approvable clinical device as late as you can to incorporate the most advanced features," Anderson said.
During the mock submission process, Anderson spoke with Life Technologies' Applied Biosystems unit, Waters, Agilent Technologies, and Thermo Fisher Scientific, which were all interested in seeing what FDA's input and comments would be "because that assists them in their planning about commercialization of an FDA-approvable device," he said.
"The secondary benefit is that since all of the existing filings for such devices with the FDA are confidential, the manufacturers don't really get to see what [other companies have] proposed or how they manage their interactions with the FDA," he added. Seeing a "generic" submission "and seeing the FDA's reaction to it is just valuable to them in anticipating how they're going to approach the FDA."
The FDA did not comment for this story, but Anderson said he hopes that what the agency gets from the experience is "a lot better understanding of the science behind what we're trying to bring to them down the road. I was really surprised at the level of inquiry and thought that they put into it.
"And that just indicated to me that they were truly interested in trying to understand this, and not only in the sense of trying to figure out where this technology fits in terms of all the existing procedural boxes but what might be better to regulate differently for these kinds of assays than for existing ones," he said.