NEW YORK (GenomeWeb) – Ahead of a two-day public workshop on regulating next-generation sequencing tests, the US Food and Drug Administration has published discussion papers outlining the approaches it is considering and has asked stakeholders for feedback on specific questions.
Earlier this year, the agency held a workshop to discuss its plan to develop analytical standards for ensuring NGS test results are accurate and reliable, and to use curated variant databases to establish the clinical interpretation of such tests. Based on the public feedback, the FDA has refined its regulatory considerations and is asking for more specific guidance from the community.
On Nov. 12, the FDA workshop will focus on developing analytical standards for NGS testing, and on Nov. 13 the agency will discuss variant databases for clinical interpretation. Stakeholders can provide input at these workshops or in writing.
Given the breadth of markers that a single NGS test can analyze at once, the FDA recognizes that it cannot review performance data for each analyte as it currently does for other tests. As such, the agency is considering creating a so-called design concept standard, advancing predefined performance standards, or adopting a hybrid of the two approaches.
"Compliance with standards could substitute for premarket clearance or approval by FDA of each individual test," the agency wrote in the discussion document. The design concept standard would be more flexible, while performance standards would be more prescriptive.
"The first relies heavily on assessment that developers know how to successfully develop quality tests, but does not necessarily involve FDA premarket review of each test developed and validated," the agency explained in weighing the benefits of each approach.
The agency or a third party would lay out "principles" of test design for the developer to follow, for example, by establishing the test's clinical purpose, the specimen types needed, the limitations of the sequencing instrument, interfering substances that might impact amplification or sequencing, and the reference genome used. By considering the overall design of the test through this process, "developers should be able to consistently generate high-quality NGS-based tests," the agency said.
Under the second option, the FDA or a third party would develop specific metrics and performance specifications that an NGS test has to meet, which "may allow less flexibility to accommodate changes in technology." The agency expects it would develop specific standards for different clinical indications — whether the test is for diagnosing a symptomatic population, treatment selection, or screening asymptomatic people.
A number of groups are currently working on standards for NGS tests. The non-profit MED-C today launched a lung cancer registry within which it aims to advance analytical metrics for NGS panels (see related story).
As NGS is used more widely in clinical settings, exceedingly rare markers will be identified, and researchers cannot use traditional study models to determine whether these markers are benign or disease-causing. Recognizing this, two years ago the FDA used Johns Hopkins University's CFTR2 database — which contains most of the known variants observed in patients — to clear Illumina's MiSeq Cystic Fibrosis 139 Variant Assay.
The agency envisions that other NGS tests can also come to market with the help of similar databases. In a second discussion document, the FDA said it would like to discuss at the workshop the quality metrics that "well-curated" variant databases must meet in order to be used as sources of clinical evidence for NGS tests.
In order to establish such a database, the FDA said developers would need to ensure, for example, that standard operating procedures are reviewed regularly, consistent nomenclatures are developed, software analysis tools are validated, and personnel with expertise in variant curation are hired.
After the FDA greenlights a database as meeting such metrics, developers would use the information in it as part of premarket submissions for tests, in lieu of having to generate evidence on each variant through clinical studies. After the agency approves a test, labs could also use these databases to interpret what the markers mean in the context of disease. For CFTR2, researchers conducted functional and clinical studies to determine the pathogenicity of the CF markers.
The agency envisions that metrics for curated databases could ultimately enable a process by which different stakeholders contribute to the evidence base on the variants. Some in the field have criticized existing public variant databases for being poorly maintained and for containing inaccuracies.
"Over time, interpretation is likely to become more standardized than in current practice," the FDA wrote in the discussion paper. The NIH is building a public variant database, called ClinVar, in which researchers are developing standard methods for submitting variants data and determining their clinical relevance.
This article has been updated to correctly note that the public variant database effort at NIH is called ClinVar.