Genomic Health Shares First Analytical Validity Data for Liquid Biopsy Test

This story has been updated with an additional comment from Genomic Health.

NEW YORK (GenomeWeb) – Genomic Health has followed up the launch of its liquid biopsy assay earlier this year with the first public data that describe in detail the test's analytic validity.

The company presented the results in a poster at last week's European Society for Medical Oncology 2016 Congress in Copenhagen, reporting per-sample sensitivity down to a 0.1 percent allele fraction for single nucleotide variants, or below three copies for CNVs, with more than 99 percent specificity.

According to Genomic Health, the test was able to maintain a 95 percent detection rate as long as tumor DNA was present at a frequency of at least 0.19 to 0.56 percent relative to background DNA, depending on the type of genetic alteration in question.

The company believes that its methodology for describing these results — on a per-sample basis rather than a per-nucleotide basis — is more transparent, and better for helping clinicians or other potential customers understand both the analytical potential and the limitations of its test.

The importance of transparency in describing liquid biopsy test validity was a subject of discussion and debate earlier this year at a joint workshop organized by the US Food and Drug Administration and the American Association for Cancer Research, which included representatives from several other firms that have launched sequencing-based blood tests for cancer patients over the last few years.

As these assays have multiplied, their relative strengths and weaknesses, as well as the overall validity (and ultimate utility) of blood-based cancer testing, have come under greater scrutiny.

At the FDA-AACR meeting, Foundation Medicine chief scientific officer Phil Stephens argued that labs should validate assays before launching them for commercial clinical use.

"If a company offering a CLIA-based test has not released an analytic validation it does not know what the accuracy of its test is and neither do the treating physician or the patients," he said.

Foundation began sharing validation data at scientific meetings months before it officially launched its FoundationACT ctDNA assay, although it hasn't yet published its validation in a scientific journal.

In contrast, many of the other companies currently marketing ctDNA sequencing tests through a CLIA lab launched their tests before releasing much detail about their analytical performance experiments.

PGDx, which announced its clinical pan-cancer ctDNA test earlier this month, hasn't yet published on its analytical validation, though it has shared results from studies using research-based methods similar to the new clinical version of the test.

Guardant Health launched its Guardant360 pan-cancer sequencing test the year before publishing its own validation in PLOS One in 2015. However, it did discuss some of its analytical and clinical validation findings at scientific meetings in the interim.

Resolution Bioscience published validation data for its assay a few months after announcing its clinical launch in 2015. And Genomic Health's test had been available for about three months before the validation data came out at the ESMO meeting last week.

Even when validation data become public, the way results are described varies, potentially making it hard for physicians or other consumers, even expert ones, to understand how these reports speak to the accuracy of tests, both individually and in relation to one another.

For example, Genomic Health reported sensitivity in its ESMO poster on both a per-sample and a per-variant basis. In other words, sensitivity is described as the lowest level that can be detected a defined percentage of the time, rather than simply the lowest level ever detected.

In the ESMO poster, Genomic Health created a table that lays out the lowest allele frequencies at which it was able to find particular types of variants 95 percent of the time. The company's test can detect alterations at lower frequencies — down to 0.1 percent — with the same 99 percent specificity. But the detection rate isn't 95 percent at that point.
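To make the per-sample framing concrete, here is a minimal simulation sketch of how such a 95 percent limit of detection can be estimated: spike a variant in at a range of allele fractions, count how often it is called, and report the lowest fraction with at least a 95 percent hit rate. This is not Genomic Health's actual method; the sequencing depth, calling threshold, and replicate count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

DEPTH = 5000         # assumed unique cfDNA fragments covering the site
MIN_READS = 4        # assumed caller threshold: mutant reads needed to call
N_REPLICATES = 1000  # simulated contrived samples per allele fraction

def detection_rate(allele_fraction):
    """Fraction of simulated replicates in which the variant is called."""
    # Mutant-read count in each replicate is binomial(DEPTH, allele fraction).
    mutant_reads = rng.binomial(DEPTH, allele_fraction, size=N_REPLICATES)
    return np.mean(mutant_reads >= MIN_READS)

# Scan allele fractions; the lowest one with a hit rate of at least
# 95 percent is the per-sample LOD95 in the sense used in the poster.
for af in (0.0005, 0.001, 0.002, 0.003, 0.005):
    print(f"AF {af:.2%}: called in {detection_rate(af):.1%} of replicates")
```

Under these toy assumptions the 95 percent threshold lands at a few tenths of a percent, in the same range as the poster's 0.19 to 0.56 percent figures, while variants well below it are still called some of the time, which is exactly the distinction the table is drawing.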

"When it comes to limit of detection, that is the threshold at which we detect a variant 95 percent of the time at or above the threshold allele frequency provided in the table. We know that we will detect variants at lower allele frequencies and that these are likely valid given our high specificity," Genomic Health chief medical officer Phil Febbo explained.

"Thus, the limit of detection is more informative for a negative report," he added. "If we do not find a variant, the physician and patient can be confident that there was less than a [five percent] chance of having the variant at an allele frequency above the threshold listed."

In a positive report, the specificity is more important, Febbo argued. "[That] means that there is less than a [one percent] chance of the reported variant being a false positive even when the allele frequency is below our established [95 percent] limit of detection threshold," he said.
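As a rough back-of-the-envelope reading of those two quotes (illustrative only; a formal predictive-value calculation would also need prevalence and panel-wide error rates):

```python
# Toy reading of the two figures quoted above; not a formal
# predictive-value calculation, which would require prevalence.

sensitivity_at_lod = 0.95  # detection rate at or above the LOD threshold
specificity = 0.99         # Genomic Health's reported specificity

# Negative report: chance a variant truly present at or above the LOD
# threshold was nonetheless missed.
print(f"missed despite being above LOD: {1 - sensitivity_at_lod:.0%}")  # 5%

# Positive report: chance a reported variant is a false positive,
# regardless of its allele fraction.
print(f"reported variant is spurious: {1 - specificity:.0%}")  # 1%
```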

For Guardant Health, the PLOS One study indicated that Guardant360 can achieve a limit of detection of 0.25 percent mutant allele fraction (MAF) while still detecting 80 percent of samples. Another 30 percent of samples could be detected at 0.1 percent MAF or lower, the authors wrote.

However, in promotional materials, Guardant presents the situation more simply, stating that its test has an LOD of 0.1 percent per base, without indicating how that changes per sample with a defined 80 percent (or, a la Genomic Health, 95 percent) detection rate.
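One rough way to see why a per-base LOD and a per-sample detection rate can diverge, using the same kind of toy binomial model as in the sketch above (depth and calling threshold are again illustrative assumptions, not Guardant's parameters):

```python
from scipy.stats import binom

DEPTH = 5000   # assumed fragment depth at the variant site (illustrative)
MIN_READS = 4  # assumed mutant reads required to call a variant

# Per-base framing: at this depth a 0.1 percent MAF variant is callable in
# principle, since the expected mutant-read count clears the threshold.
print("expected mutant reads at 0.1% MAF:", DEPTH * 0.001)  # 5.0

# Per-sample framing: the probability that a given sample at 0.1 percent MAF
# actually yields enough mutant reads to be called.
p_detect = binom.sf(MIN_READS - 1, DEPTH, 0.001)  # P(reads >= MIN_READS)
print(f"per-sample detection rate at 0.1% MAF: {p_detect:.0%}")  # roughly 73-74%
```

Under these assumptions a variant that is callable per base at 0.1 percent is found in well under 95 percent of samples at that fraction, which is why a headline per-base LOD and a per-sample detection rate are not interchangeable.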

Industry observer Girish Putcha — who is director of laboratory science at Medicare contractor Palmetto GBA, but was not speaking to GenomeWeb in that capacity — said that both strategies for looking at accuracy have downsides. It is best, he suggested, to report both, as Palmetto's MolDx recommends in the analytical performance specifications it has developed for ctDNA testing.

At the FDA-AACR workshop this year, Putcha also raised the issue of sub-LOD reporting — where a company might report results to physicians that are below its published limit of detection.

"When you've done the experiment to characterize what your LOD is, that is your limit of your detection. If you want to report something below that and a pathologist or a physician wants to act on that, that's their prerogative. I think it is at least the lab's responsibility to make it abundantly clear that's exactly what you are doing — reporting something below your validated LOD," he said at the meeting.

Genomic Health's poster notes that levels lower than 0.1 percent may be reported. Febbo explained in an email to GenomeWeb that when the company reports variants to physicians below the threshold for 95 percent sensitivity, it does not impact specificity because the test's specificity is based on detection of the variant regardless of allele frequency. 

The available validity data can also vary in how it addresses different classes of variants: SNVs, copy number variants, indels, and fusions. For Guardant, data on direct sensitivity comparisons across these variant types has been limited, though the company claims that Guardant360 accurately calls all four classes at less than 0.1 percent mutant allele frequency or at as few as 2.12 gene copies.

In its ESMO poster, Genomic Health lists all four variant types in the same table, giving for each the per-sample LOD at the same 95 percent detection rate as well as the ultimate per-base limits of detection.

Putcha told GenomeWeb that MolDx has indicated in its specifications for ctDNA test validation the need to clearly establish performance by variant type because it can be meaningfully different.

At the FDA-AACR meeting he also highlighted the fact that while publication of these validation metrics is a good start, third-party verification of such analytical claims would be far better.

"The problem with the transparency [is that tests] ought to be third-party verified — not just a claim by an interested party about what the performance is," Putcha said at the workshop. "You can provide all the transparency in the world that you want but if it's your data and your claim, with respect, I'm a little skeptical."

MolDx has started trying to make public this kind of verification in its evaluations for local coverage decisions. Lab regulators like the College of American Pathologists and the NY State Department of Health largely do not, Putcha said in an email to GenomeWeb.

According to Febbo, third-party validation is important, but he argued that the validation studies conducted by companies like Genomic Health can often be more rigorous than those an academic group or other outside party might perform.

A molecular testing firm has much to lose if it screws up a validation and is later taken to task for it, so it is in the best interest of testing firms to rigorously validate their tests. Despite this, he said, history, both remote and recent, has shown that this is not always the case.