Q&A: FDA's Elizabeth Mansfield Addresses Concerns After Release of Draft Guidances on NGS Testing

NEW YORK (GenomeWeb) – Following the US Food and Drug Administration's recent release of two draft guidances on NGS testing, members of the clinical lab community were immediately suspicious that the move was a precursor to the agency's impending regulation of laboratory-developed tests. 

The agency has denied this, presenting the NGS draft guidances as a voluntary framework of good principles for designing, developing, and validating such tests, which developers can choose to adopt whether they are advancing NGS tests as LDTs or as kits.

However, stakeholders have also raised questions about specific recommendations in the two draft guidances: one on establishing the analytical validity of NGS germline tests, and another describing a process for recognizing public genetic variant databases as a regulatory resource for demonstrating the clinical validity of NGS tests.

They wondered, for example, whether the minimum analytical thresholds proposed by the agency were realistic and how the agency had come up with the numbers. And without incentives to do so, would owners of variant databases put in the work to meet FDA's specifications for database recognition, and would labs even submit to them?

In an interview with GenomeWeb, Elizabeth Mansfield, director for personalized medicine and molecular genetics at the Office of In Vitro Diagnostics and Radiological Health within FDA's device division, said that whether stakeholders believe it or not, there is no conspiracy afoot. “We're highly aware of the potential for innovation. We're highly aware of the fact that new information is just flooding out of research,” Mansfield said. “And we want to make that as accessible as possible to patients, as soon as possible. But we have this duty to protect the public health and look at safety and effectiveness. What we're really aiming to do is shoot for the best system we can and the most efficient system we can.”

Below is an edited transcript of the interview.


The FDA has said that the recommendations in these NGS draft guidelines are voluntary and separate from the LDT draft guidance. What are the advantages to companies that voluntarily come in for FDA review? Why should they do it?

That's up to each individual lab as to why they would want to do that. Maybe nobody wants to. I don't know. But we're putting these out as what we believe are the best practices, the activities that need to be carried out to design, develop, and validate a test. It doesn't require anyone to come into FDA. It provides a useful set of recommendations that, even if you don't come into FDA, might make some sense to you and your laboratory.

So, I don't know why labs would want to come in voluntarily. We've certainly had some people come in with their LDTs. I guess it's a business decision.

There were certainly a lot of stakeholders present at the public meetings on NGS testing that formed the basis for these guidelines. Did you get the sense through those meetings that there were labs that would voluntarily come in?

I don't know that we really thought about it at the time. We were just trying to get feedback on the ideas that we were trying to put forward and to see what potential stakeholders had to say about what we wanted to propose.

In the analytical validity guidance, FDA discusses the possibility of classifying NGS germline tests as Class II devices and notes that the risks associated with such tests might be mitigated by general or special controls via the de novo process. But the recommendations don't explain why FDA considered NGS germline tests Class II, or why FDA believes that, by using special controls, sponsors could potentially gain 510(k) exemption for such tests. There's not much detail in the draft guidance defining actual risk classifications in the context of NGS tests. Can you provide some specifics on how you would classify germline NGS tests and why you would classify them as moderate risk?

Regardless of what we said in the guidance about thinking these might be Class II, we actually haven't classified this type of test yet, because we haven't yet seen one for classification. What we said in the guidance was based on our understanding of the risk of these kinds of tests to patients or users, because that's how we do classification generally. For in vitro diagnostics, we mostly consider the risk of incorrect results. If you get the result wrong, what will happen to the patient if the physician believes that result and does whatever the normal practice is?

This type of test, we believe, is generally run on people who already have a phenotype that makes it clear that there is something different or wrong with them. Often in that scenario, we don't believe the risk is necessarily as high, if it's not to make a treatment decision, if you're just trying to tell someone what the genetic basis for their disease is. So, that's how we came to the possibly Class II decision, but of course, we'd have to actually see one and classify the test based on what we actually thought the true risks were.

The exemption is something that we can do when we can create special controls and understand the evidence well enough that we believe that premarket review is not required. The exemption always follows the classification. You can't do both at the same time. You have to do the classification first, and decide you can exempt.

Is FDA going to issue a separate guidance or add clarity through another mechanism about the classifications for NGS tests specifically, or do you think it would be part of the broader classification work that you're going to be doing with LDTs? [Editor's Note: FDA has said it will issue separate guidance on risk classifications for LDTs.]

We don't think the risks of NGS tests are remarkably different from those of other types of tests. So, I would say that no, we don't plan on issuing a guidance specifically for classification of NGS tests, and that we would likely continue to classify them the way we have over the last 40 years.

In the analytical validity draft guidance, the FDA states that it is unaware of any other standards that evaluate the performance of NGS tests in a way that ensures their safety and effectiveness adequately. When I reached out to FDA for an explanation of this, you guys responded that some of the other standards by the College of American Pathologists and the New York State Department of Health don't address product development. Lab directors I've spoken to take issue with that and think that some of these other guidelines are sufficient. Can you lay out what, in your view, those other standards don't address in terms of analytical validity, and what FDA hopes to address?

Let me start off by saying that the recommendations made by these professional bodies and the New York State Department of Health are not standards. They're recommendations. Standards are developed by standards-development organizations and they represent a consensus statement where all stakeholders have a voice. Although these recommendations are available, they are not standards and we can't recognize them as standards.

We did of course look at CAP, CLIA, New York State, and the American College of Medical Genetics and Genomics for what their recommendations were. To some degree the recommendations, and it varies according to who made them and when, address laboratory operational issues, which for us are interesting but not our area of authority. We don't believe that any one of these sets of recommendations alone is sufficient for our purposes to assure that a test would be safe and effective under our particular standard. Most of them don't really address, as we said before, the issue of design and development. That's an area that's important to FDA.

These guidances were not written to say 'this only applies to laboratory-developed tests.' These apply to all the developers of NGS tests … which can include different kinds of manufacturers that are not labs, and therefore the CAP, ACMG, CLIA, and New York State recommendations wouldn't even be applicable to them.

Other than the product development areas or design control areas, are there any other aspects that these other recommendations don't touch on that FDA would like to address when test makers come through the agency?

I can't tell you offhand specifics about what's in each of these recommendations and what we think we have to add to them. But in general, we think that what we are proposing is more suitable for FDA-style oversight than the other recommendations.

The analytical validity guidance does get pretty specific in certain areas, for example in the minimum analytical performance thresholds. Some stakeholders thought those thresholds set a high bar, particularly for positive percent agreement, negative percent agreement, and technical positive predictive value, and maybe that these weren't realistic expectations or achievable for all variant types. There were also concerns about specifications for average coverage. How did FDA come up with these thresholds?

I think you already know that there was a typo in the guidance. So, some of what people reacted to was actually not our intention to put in there in the first place. [Editor's note: Mansfield is referring to language in the draft guidance that seemed to be recommending average coverage for detecting germline heterozygous variants at a depth of 300x at 100 percent of bases using a targeted panel and at least 97 percent of bases using whole-exome sequencing. Stakeholders raised issues with the 300x coverage metric and the FDA has noted it was a typo.] The way that we came up with the numbers we did is, we talked to a lot of laboratorians, we talked to professional societies, read different guidelines, and so on. We came up with particular numbers that we wanted to put out there.

We're aware they are rather conservative. We are also aware that NGS technology, and what people know about how much coverage is needed and so on, is gradually changing as we get more and more comfortable with the technology. So one of the values of putting out a draft guidance is that people have ample opportunity to let us know that they think we got it wrong and why. So, if the stakeholders who believe we're being too stringent or unrealistic think that, then we'd love to hear what is a realistic number and why. In the future, if they think the coverage requirements or depth or anything like that may not need to be as stringent as it is today, we'd like to hear from them how FDA should deal with that as we gain more knowledge.

Realistically, we're trying to put out best practices. We want them to be flexible. We want them to evolve with time. That's one of the reasons we proposed this standards approach because standards can evolve rather quickly as technology and practice changes. Then, FDA needs only to recognize the new standard and we don't have to go through rulemaking or anything like that.

So, if the draft guidance as it is today gets finalized, how much flexibility ultimately would test developers have with the minimum thresholds? If, for example, for a particular test a sponsor couldn't meet the coverage threshold, how much flexibility would there be in the premarket review process?

I can tell you what we typically do when we put out a guidance and we have numbers. Because guidances are not binding, people may always propose a different way to achieve the end goal of an analytically and clinically valid test. It is absolutely not unusual for people to come in and say, “We want to do something less, more, or different than what you've proposed, and this is why.” We look at their proposal, we look at their justification, and if it holds up, we agree with them. People in the lab community may not be very familiar with this, but historically, we are rather flexible if you can justify why you're doing what you're doing.

Moving on to the second draft guidance, on the use of public variant databases. In the process you've outlined, there are multiple features of a database that you've proposed to look at when recognizing them. Stakeholders have pointed out there is a lack of incentives in the field to pursue recognition, but also for labs to submit to public databases. A lot of labs, for example, aren't submitting to ClinVar, the variant database supported by the NIH. Has FDA thought about that, and would you be interested in working on incentivizing these activities?

We absolutely would be interested in working on incentives. We have noted some incentives we believe already exist. As for why databases would want to pursue recognition: if they are supported by grants or some other funding mechanism, FDA recognition might be a positive for them, since continued funding is important for a database to go on. We also think certain database owners are just going to want the recognition that their database is what we're considering high quality.

I'm aware that labs aren't necessarily submitting a lot of data to databases right now. It's not 100 percent clear why, though there are cost and privacy issues, but we would like to help out in that area. We think an incentive might be that the more developers submit data, the more information there is to use, and the less we have to rely on people generating new information.

Would you consider a premarket exemption for labs that do submit to a public database as a way of encouraging submissions?

I don't know that we can use those kinds of incentives right now under our existing legal structure.

Funding limitations have hindered upkeep of public variant databases, and there are a number of databases right now that require fees for access. So, how are you defining the term “publicly accessible” in the draft guidance? Would there be flexibility in recognizing databases that have different funding structures?

We would be happy to consider that. It would be a great thing for stakeholders to comment on. Our guidance was predicated on the information in the database being publicly accessible, and was silent on whether that involved a fee or not. Publicly accessible means anybody can access it and there is no hindrance. And if everybody can pay a fee and access it, then we can consider whether that's still publicly accessible.

Now that ClinVar is out there, a lot of people who have submitted to the database want that resource to be recognized as an open-access, centralized repository for variant information. But FDA has said it won't limit recognition to a single database. Why do you think it's more worthwhile to have a process in place for recognition of multiple public variant databases, instead of supporting the advancement of one resource?

This is a bit tricky. I don't know that, having said we want to recognize databases through this process we put out, we could limit it to a single one and still be operating fairly within our authority. We also are aware that there may be databases that have value that maybe have a different focus than ClinVar, or have different populations. I think it would be great if ClinVar is recognized and people put their information in ClinVar. But I don't think the agency is in a position to say this is the only one and nobody else can play.

The FDA has said that the NGS draft guidances are unrelated to its proposal to regulate LDTs. But some lab industry professionals are confused by that, since most NGS tests are LDTs, and have suggested this is more of a political distinction and not really a practical one. Why is FDA making this distinction at all?

As you are probably aware, we have cleared some NGS tests. We hope to do more in the future. There are IVD companies out there that are making the technology. There are companies we are talking to that don't intend to run a laboratory model for running their tests. And they need guidance, too, regardless of whether laboratories develop NGS tests. So, in fact, these guidances have nothing to do with laboratory-developed tests per se. They have to do with NGS tests, what we think the best practices are, and if an NGS test is submitted to us for regulatory authorization, what the requirements are.

I know people want to read things into these guidances, but sincerely, they are just not there.

Is there anything else that you wanted to highlight about these draft guidances that I didn't ask about?

One of the first things, based on some of the responses that I know you've gotten when you've interviewed other people, is that I think it's really important for the community to understand the draft guidance process. These are proposals put out for comment. They are not final decisions. So, reacting really negatively to what we've proposed is exactly what we have a comment period for. It's really important for people to comment on what they do or don't like about the guidances, what they think they say, and so on. I want to make sure for labs that aren't necessarily that aware of the guidance process that they understand that this is not a done deal. This is the way we go about putting out guidance.

The other thing is, whether people choose to believe me or not, what we are trying to do is get to the most rational way of oversight that allows tests to evolve, that allows the scientific information that supports this information to evolve, in the most rapid and reasonable way possible. We're highly aware of the potential for innovation. We're highly aware of the fact that new information is just flooding out of research. And we want to make that as accessible as possible to patients, as soon as possible. But we have this duty to protect the public health and look at safety and effectiveness.

What we're really aiming to do is shoot for the best system we can, and the most efficient system we can. The way that we found to get at a lot of this is what we want to continue, which is to talk to the stakeholders, to find out what they think of policy issues and technical issues. We did a lot of that before we published these guidances, and we intend to continue.