This article has been updated to correct the name of the panel's moderator.
AUSTIN, Texas (GenomeWeb) – At the annual meeting of the Association for Molecular Pathology this week, Illumina sponsored a panel in which leaders in the field discussed practical, reimbursement, and other considerations for labs implementing clinical next-gen sequencing.
During AMP's corporate workshops on Wednesday, Illumina’s John Leite introduced and guided the three-member panel — Charles Mathews of Boston Healthcare Associates, Suzanne Kamel-Reid of Toronto’s University Health Network, and Marisa Needham from Duke University — who spoke to the reimbursement and economic value, the real-world actionability, and the nuts and bolts of implementing and validating clinical NGS.
Central to the discussion was the interplay among the issues of actionability, implementation, and reimbursement.
Needham, who, prior to joining Duke, was a leader in validating and implementing the Illumina TruSight One, TruSight Tumor, and TruSight Myeloid Enrichment Panels on the MiSeq at the University of Virginia, shared her insight on the question of validating sequencing against orthogonal technologies.
With NGS entering the clinic, labs must find ways to demonstrate that the newer technology is equivalent to a gold standard. However, the increased sensitivity of NGS relative to older methods like Sanger sequencing can make direct comparison difficult or impossible.
"Sanger is going to be difficult, for example, if you are looking at variants with a low frequency," Needham said. In light of this, at UVA, she said, the team looked to other methods, like pyrosequencing, against which to validate their new NGS strategies. At Duke, meanwhile, the lab is working with locked nucleic acid primers to increase Sanger sensitivity, she said.
Boston Healthcare’s Mathews said that in his broader reviews of the lab community, it has become clear that Sanger can only be taken so far. "There is a lot of Sanger confirmation initially, but then you move away from that safety net as you begin to trust the [NGS] assay."
"Where I think we are headed," Mathews said, "is more 'let's prove the methodology, let's prove that sequencing is accurate, and if it's there it will find it,' and less about [validating each target] even if it takes 10,000 samples [to get one positive result]."
"Really the orthogonal assay should be any assay you have in the lab," Kamel-Reid argued.
"I'm sorry to say this in an Illumina venue," she added, "but we actually tested a similar platform on both the [Thermo Fisher] Life Tech and Illumina platforms to validate that way. If [the data] is reproducible it gives you good confidence that it's actually real changes. If you have the fortune of having multiple platforms, that's another way to do this."
Separately, the question of whether there is a need for matched tumor and normal sequencing also came up in the panel discussion. Tumor versus germline sequencing is an issue that has come to the fore in the NGS field since a publication by Johns Hopkins researchers earlier this year, which concluded that tumor-only analysis "may lead to inappropriate administration of cancer therapies with substantial effects on patient safety and healthcare costs."
“We do [sequence matched normal DNA] and I think it's extremely important for larger panels,” Kamel-Reid said. “But for the smaller panels [like the] TruSight 26-gene panel we are not. We don’t need to. I think we know what we expect to see and if we see something unusual we could then get a blood or a buccal swab.”
“Another thing we do is use our own data to teach ourselves about variants we see frequently, using data amassed over thousands of samples to build a database so we can avoid having to do matched normal sequencing,” she said.
As NGS has opened the door to much more comprehensive genomic analyses than previous technologies, the question of implementation has also come to encompass bioinformatic strategies that had little place in these practical considerations even just a few years ago.
"The question of implementation raises a huge point," Needham said. "At UVA there was very little bioinformatics support in house so we had to outsource it."
At first using only a single informatics pipeline, Needham and her colleagues missed a KIT mutation on a proficiency test.
"It was there. We could see it in the sequence data," she said. "But the analysis had completely missed it." UVA then shifted to using two parallel analysis pipelines, she said, noting their concordance or discordance, and then going back to the sequence data in cases where the two do not match up.
"It's very tedious. It takes a lot of time, but there is something that is always missed by one and the other bioinformatics pipeline catches it. This is something that isn’t widely talked about but is very important."
The new role of bioinformatics also has a connection to the question of value and reimbursement, Mathews said. As part of a project for AMP aimed at studying in detail the value and costs of clinical sequencing, he recently conducted an in-depth analysis of the activities of 15 labs, going through their protocols step by step.
"Some of the most complicated aspects we saw were the bioinformatics," he said. "That historically has not existed before. An analyzer bumped out a result positive or negative. Whereas now, getting to the FASTQ files is just one part of the process."
"That’s part of why defining the value of NGS is so complicated because so much of the value is on the back end," Mathews said.
Meanwhile, questions of actionability and value were also closely intertwined in the panel discussion.
"From my perspective something that is clinically actionable not only impacts information about pathogenicity, but anything that would alter patient management as well, so whether it is druggable, or prognostic, or aids in diagnosis, or helps you determine predisposition in a family. There are a lot of different definitions of actionability," Kamel-Reid said.
However, this expansive view of clinical utility has made it difficult for the lab community to make the case for NGS to payors, Mathews responded.
Asked by Illumina's Leite about the ways different stakeholders view the value of clinical genomics, Mathews said that much of his work is devoted to trying to find a way to bridge the dichotomy of such different viewpoints.
"A lot of labs have jumped into NGS and seen the potential to replace single gene or single biomarkers with more comprehensive solutions," he said. "It's really driven by clinicians, but in some cases they are asking for more information and I'm not sure they always know what they'll do with it once they get it."
"That’s where I think the disconnect with payors occurs," he explained.
Clinical NGS does have inherent value, he added, but in the near term he advocated focusing on targeted applications, rather than the broad utility of NGS, when making the case to payors.
"Right now I think payors have a very strong understanding of targeted applications for NGS … NIPT [for example], to me that’s a great case with a very clear value proposition in a space where traditional tools could not accomplish the same thing."
On the other hand, Mathews said, something like Foundation Medicine's testing paradigm is at "a premium price point, and [I think payors are wondering] how much is this a sales and marketing job and how much is it going to fill a real clinical need."
Part of the problem is the current therapeutic landscape. "We only have a limited number of targeted therapies now," he said. "Eventually, when we live in a world of 20 different targeted therapies, when we get to that level, it's going to be a very clear value proposition. So in some ways NGS is a technology ahead of its time."
"The key is going to be defining applications that have value right now … focus[ing] on specific clinical applications, for example, liquid biopsy, and … finding those clear-cut cases to bring to payors."
Finally, all on the panel agreed that the laboratory and clinical community must do a better job of collecting data reflecting tests' real-world impact and practical value.
"As a lab, you've got this tool and made the case to your administration to buy the instrument and now you are doing this in practice," Mathews said, "But we all need to then work together to collect data and get out some publications looking at how it actually changes things. As an industry we haven’t done much of that and I think that’s why we are seeing some struggles with reimbursement for complex molecular profiling."
From a clinician's perspective, Kamel-Reid said that the options and choices available — the decisions to be made as to how many layers of information a physician actually needs to make a clinical decision or a pathologist needs to define a diagnosis — have become overwhelming.
"Should we be doing small panels, larger panels, RNA-seq, what about translocation [and] methylation?" she said. "Are we going to have a one-stop answer for this, or use multiple approaches?"
"Right now I think it's an exciting time but it can also be overwhelming when you are thinking about what to invest in in a laboratory … so the more education we can get around all that the better."
During the panel, Kamel-Reid highlighted one new step in that direction: GENIE, an international genomic and clinical data sharing project announced today by the American Association for Cancer Research.
GENIE, which stands for Genomics Evidence Neoplasia Information Exchange, will pool CLIA- and ISO-certified sequencing data from the participating institutions into a single registry and link it with select longitudinal clinical outcomes.
Project participants are hoping the registry will help with validating gene signatures of drug response or prognosis, identifying new patient populations for FDA-approved drugs, expanding patient populations that will benefit from existing drugs, and identifying new drug targets and biomarkers.