NEW YORK (GenomeWeb) – Researchers and industry stakeholders last week advised the US Food and Drug Administration on how the agency might regulate next-generation sequencing-based tests without hindering developers' ability to improve the technology and the diagnostics.
The FDA held a public meeting on Friday to gather input on novel regulatory strategies for overseeing NGS platforms and tests. With its clearance in 2013 of Illumina's MiSeqDx platform and two cystic fibrosis tests that run on that platform, the FDA brought to the meeting some ideas about the types of approaches that might address the unique challenges of NGS.
To clear the Illumina platform, the FDA looked at a subset of variants assessed by the test to provide "reasonable assurance" that it would be able to accurately gauge genomic variants of interest. In order to clear the two CF assays, the agency used Johns Hopkins' curated CF database of variants to establish their clinical performance.
"We felt this was a reasonable approach to take but there are going to be future challenges as we move to whole-genome [sequencing,]" David Litwack, a member of the personalized medicine staff at FDA's Office of In Vitro Diagnostics and Radiological Health, said at the meeting. The FDA, he said, wants to create a regulatory framework for NGS tests that considers the current state of the technology but is flexible enough to allow innovation to happen in the future.
"We're clearly moving toward whole genomes and some people are envisioning a world … when everybody will be sequenced at birth and [that data will be] deposited in the medical record and used for multiple things throughout their lifetime," Litwack added. "Can we design something that can last into the future?"
One of the ideas the FDA is considering is a certification process through which an external body can develop analytical validation standards that NGS manufacturers and labs performing such tests will have to meet. The agency is also considering developing software that can verify the quality of an individual NGS run, and establishing systems to validate NGS software. Using funds from the $10 million it is slated to receive under the President's Precision Medicine Initiative, the FDA said it could develop this software and make it available for free to the genomics community.
The FDA is also hoping to spend a portion of the Precision Medicine Initiative money to enable labs to establish the clinical validation of NGS tests using well-curated, "regulatory grade" databases of genetic variants associated with disease – not unlike the Johns Hopkins' CFTR2 database used to clear the 139-gene Illumina CF test.
A flexible approach
The value of NGS lies in its scalability across different experiments and clinical contexts. But that flexibility makes the technology challenging to regulate, since the FDA traditionally regulates based on a device's intended use and the risk it poses to the public's health. The intended use of an NGS test, and therefore its risk, may be hard to pin down because a single test can be used simultaneously for a range of diseases and in different clinical contexts.
"The real complication comes down to this concept of intended use," Neil Risch, director of the Institute for Human Genetics at the University of California, San Francisco, said at the meeting. "In our newborn screening we're doing other disorders but we could be getting cystic fibrosis information, too. [FDA has] already approved that for NGS. So, then, should we be allowed to report that out?"
In clearing the MiSeqDx platform, the FDA designated it a Class II device, and said that other labs with tests with the same intended use could complete the requirements the agency has laid out as "special controls" for this type of device, and market those without making a premarket submission. Thermo Fisher Scientific last year said it had completed the listing of the Ion PGM Dx NGS system with the FDA as a Class II device.
Meanwhile, the FDA-cleared Illumina CF tests – the clinical sequencing assay and the 139-variant assay – run on the same MiSeqDx platform, but have markedly different FDA-cleared intended uses. In clearing the Cystic Fibrosis Clinical Sequencing Assay, which sequences the entire CFTR gene, the FDA labeled the test as a tool that can be "used as an aid in the diagnosis of individuals with suspected cystic fibrosis," particularly when they have an atypical presentation of the disease and other tests have failed to identify causative mutations. Meanwhile, the FDA said that the Illumina MiSeqDx Cystic Fibrosis 139-Variant Assay may be used for carrier screening in adults of reproductive age, for confirmatory diagnostic testing of newborns and children, and as the first-line test to help diagnose those suspected of having CF.
"The conceptual issue that needs to be considered when you start thinking about regulating NGS tests … has to be the intended use," John Pfeiffer of Washington University School of Medicine said at the meeting. "You can have all these other metrics that are the same and still have tests that have profoundly different clinical utility in different clinical settings."
The establishment of a uniform framework to certify the analytical validity of NGS tests is further complicated by the different library preparation methods and bioinformatics used by labs. These varying processes mean that two labs using the same NGS platform could have varying capabilities to identify the four classes of genetic variants – single nucleotide variants (SNVs), deletions and insertions (indels), copy number changes, and structural variants. "Two different laboratories may show that their tests have the same metrics in terms of sensitivity and specificity, but the two tests now are fundamentally different, in terms of the breadth of the mutations that they can find and hence their clinical utility or how useful they are for a specific intended clinical use," Pfeiffer said.
From a regulatory standpoint, Pfeiffer proposed the need for a "gold standard" in NGS testing. "Most of us who do next-generation sequencing are pretty convinced that our tests are more sensitive and more specific when they are optimized than Sanger sequencing," he said. "So, as we move forward on this, what is the gold standard we're going to use to evaluate whether or not the metrics and test results we're getting are actually correct?"
While some speakers at the meeting felt that it was feasible to advance software to establish the analytical performance of different NGS tests for various clinical uses, others pointed out the need for benchmark standards. "You can argue about what percentage of the genome you can accurately call for what type of variants and that's going to continually be a moving target but you still need your benchmark datasets that you compare to get your accuracy," said Deanna Church, who was previously part of a National Center for Biotechnology Information group that contributed to the Genome Reference Consortium, but now is at the genome interpretation services shop Personalis.
The development of benchmark standards is a "much more difficult issue to address than the software," she said. "For any test – panels, exomes, and genomes – we should be able to define the intervals by which we can accurately identify variants of a specific type at a specific allelic depth."
In developing the analytical performance standards, speakers emphasized the need for FDA to be agile and flexible, and bring in different groups to address standards in different areas, such as the College of American Pathologists, National Institute of Standards and Technology, American College of Medical Genetics and Genomics, and Association for Molecular Pathology.
Karl Voelkerding provided insights from what his workgroup at CAP has done to establish performance standards within labs performing NGS tests. In 2012 the workgroup published a checklist for accrediting labs doing NGS clinical testing that addresses library creation and sample prep, the actual sequencing, bioinformatics, variant calling, annotation, and the final patient report.
At the public meeting, Voelkerding, medical director of genomics and bioinformatics at ARUP Laboratories and head of the working group that developed the checklist, explained that the CAP accreditation system operates on a methods-based process where the specimens labs use for validation represent the spectrum of variants a specific test is designed to detect. "If the test is designed to detect SNVs and indels, you need to use a representative spectrum of samples to achieve that during your validation," he said.
CAP has also developed a program for NGS proficiency testing. The organization has created a consensus set of variants at 200 chromosomal positions. CAP then mails labs 10 µg of DNA so they can test for these chromosomal positions and establish the proficiency of their lab. So far, 130 labs have signed up to participate in this proficiency testing program, Voelkerding said. Currently, the proficiency testing effort is only for germline variants, but CAP is planning to roll out a similar program for somatic variants.
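In essence, CAP's proficiency testing compares each lab's variant calls at the consensus chromosomal positions against the agreed-upon answers. The sketch below illustrates that kind of concordance check; it is not CAP's actual scoring software, and the positions and genotypes shown are invented for demonstration.

```python
# Illustrative sketch of a proficiency-testing concordance check:
# a lab's genotype calls are compared against a consensus variant set.
# All positions and genotypes below are hypothetical.

consensus = {
    ("chr7", 117559590): "G/A",   # hypothetical germline SNV
    ("chr17", 41245466): "C/C",
    ("chr13", 32914438): "T/T",
}

lab_calls = {
    ("chr7", 117559590): "G/A",
    ("chr17", 41245466): "C/T",   # discordant with consensus
    ("chr13", 32914438): "T/T",
}

def concordance(consensus, lab_calls):
    """Fraction of consensus positions where the lab's call matches."""
    matches = sum(
        1 for pos, genotype in consensus.items()
        if lab_calls.get(pos) == genotype
    )
    return matches / len(consensus)

print(f"Concordance: {concordance(consensus, lab_calls):.1%}")
```

A real program would also have to handle allele ordering, no-calls, and variant representation differences, which is part of why benchmark datasets and standards matter.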
A light touch
Perhaps the main difference between this meeting on NGS and the public gathering held last month to discuss FDA's overarching regulatory policies for lab-developed tests is that the agency acknowledged outright last week that its current system is ill-suited for the rapid changes in sequencing technologies and the constantly evolving genomics knowledge base.
"One of the things we're concerned about here at the FDA is that we not be a barrier to the rapid ability to use clinically valid information," Elizabeth Mansfield, deputy office director for personalized medicine at FDA’s Center for Devices and Radiological Health, said at the meeting. Industry players have told the agency, she said, that FDA review takes time and because of that sponsors are reluctant to update the technology, which "freezes" the device for a time.
Mansfield pressed meeting participants to discuss ways in which knowledge from research (i.e., every time an important gene-disease association is published) can be quickly incorporated into NGS tests. She signaled the FDA's desire to create a framework that would impart the "lightest regulatory touch possible" without creating "the wild west."
Victor Velculescu of Johns Hopkins University School of Medicine suggested that if FDA is thinking of developing an analytical accreditation system through which labs can be certified to run tests, the initial certification standards must be high. "Once someone passes that bar, obviously they were smart enough to develop this test and do this well, they can add another gene," he said.
Marc Salit, who leads a group at the National Institute of Standards and Technology that addresses issues in microarray measurement science, said that common performance metrics can be a launching pad for technological innovation. The accreditation system should provide assurance that a lab can perform a certain kind of test, but it should also be able to certify that the lab can develop and enhance a test.
As an example of a "light" regulatory approach, Girish Putcha, from Medicare contractor Palmetto, highlighted the example of the New York State Department of Health, which allows labs that have secured a license to submit materials for tests and processes and operate with provisional approval until state regulators review them.
However, Barbara Zehnbauer of the US Centers for Disease Control and Prevention pointed out that even if FDA uses an accreditation body and a certification process to make regulations less onerous, there still needs to be a way of gauging what adding a new gene or set of genes to a test means for the clinical use of the test.
Opportunity for something big
For establishing the clinical performance of specific NGS tests, FDA is keen to use databases like the CFTR2 database used to gauge validity of the MiSeqDx Cystic Fibrosis 139-Variant Assay. The agency is particularly interested in partnering with the NIH around ClinGen, a public, annotated database of variants across the human genome using standardized classification methods.
According to Heidi Rehm of Harvard Partners, so far 200 people from 75 institutions are participating in the project and 77,000 unique variants are included in the database from 200 submitters. Rehm two years ago received an NIH grant to collect variant data from labs and clinics for the ClinGen project, and develop standard formats for submitting the data to a centralized repository called ClinVar.
Rehm explained that ClinGen doesn't curate the data, but includes the interpretation of the submitters. However, the project does have a process for cleaning up the variant submissions, a starring system for the relative strength of the interpretation from a submitter, and opportunities for resolving disagreements in interpretation between groups of submitters.
"You may put in your favorite variant and figure out that the interpretation overall is conflicting between the submitters," Rehm said. The database allows the public to see what each submitter says about a variant and why they say it. "It's really a way to be transparent with the community," she added.
The ClinGen starring system tells the public which interpretations are more trustworthy. Variants published as practice guidelines get four stars; groups that get approval from ClinGen for their method of variant assessment for markers in different disease areas get three stars, and submissions with multi-source agreement receive two stars. Submissions from a single source receive a single star if the group is willing to provide ClinGen information about how it classifies variants and submit to a comprehensive review. Groups that don't agree to this get no stars for their submissions.
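The star hierarchy described above is essentially a fixed set of rules mapping a submission's provenance to a confidence level, which can be sketched as follows. This is an illustrative rendering of the article's description, not ClinVar's actual data model or API; the `Submission` dictionary fields are invented for the example.

```python
# Rough sketch of the star levels as described above. The submission
# fields here are hypothetical, not ClinVar's real schema.

def review_stars(submission):
    """Map a variant submission's provenance to a 0-4 star level."""
    if submission["practice_guideline"]:
        return 4  # variant published as a practice guideline
    if submission["expert_panel_approved"]:
        return 3  # assessment method approved by ClinGen for the disease area
    if submission["num_submitters"] > 1 and submission["interpretations_agree"]:
        return 2  # multiple sources, concordant interpretations
    if submission["criteria_provided"]:
        return 1  # single source that disclosed its classification criteria
    return 0      # no assertion criteria shared

example = {
    "practice_guideline": False,
    "expert_panel_approved": False,
    "num_submitters": 3,
    "interpretations_agree": True,
    "criteria_provided": True,
}
print(review_stars(example))  # → 2
```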
However, only 12 percent of the variants in ClinVar are submitted by two or more submitters and therefore allow for comparisons. Of these, 21 percent are interpreted differently.
Rehm warned that with a large database available, users will tend to over-rely on it without understanding when the data isn't recent or well curated. In the past, variant databases have gone without regular updates because of limited funding, or their maintainers haven't developed consistent evidence-based standards for variant interpretation. As a result, the research community has stopped using some of these resources.
ClinGen is trying to avoid that fate. The project participants are forming expert panels focused on variants for specific disease areas or indications, such as for cardiovascular disease, pharmacogenetics, and germline cancer. Rehm added that in the future a database like ClinVar should be tied to an electronic interface so that patients' records can be updated and physicians will be alerted as knowledge about variants evolves.
Gary Cutting of Johns Hopkins University School of Medicine emphasized the need to bring in data from the community setting, and include clinics and patients themselves. "The patients have the phenotypes and they also have the genetic information given to them by the laboratory," he said, adding that any database effort would also need to involve international partners to capture markers that are prevalent in patients outside the US. JHU's CFTR2 database, for example, involves entities from 55 countries.
Despite the challenges presented by building a "regulatory-grade" database, the time may be just right for such an undertaking, some pointed out. "There's a great opportunity right now in this special moment in time to bring the field together around something big and visionary and durable," said Levi Garraway of the Dana Farber Cancer Institute.
Of course, there is widespread dissension within the lab community at large about whether the FDA has the statutory authority to regulate lab-developed tests at all, regardless of whether they use NGS or some other technology platform. Representatives from AMP and the American Clinical Laboratory Association made statements at the meeting last week reminding participants of that baseline discord.