Clinical NGS Test Developers Discuss Sample Prep Challenges of Assay Validation

SAN FRANCISCO – (GenomeWeb) – Developers of next-generation sequencing-based clinical tests said at Cambridge Healthtech Institute's Molecular Medicine Tri-Conference this week that sample prep should be a key consideration when validating tests.

Presenters discussed issues specific to a range of NGS-based tests, including a targeted test that looks for somatic mutations in solid tumors, whole-genome sequencing to analyze germline variants, a test designed to analyze very low frequency somatic variants in cell-free DNA, and a shotgun metagenomic sequencing test for infectious disease.

According to the presenters, test developers should consider the specific questions a given assay is intended to address and design the assay accordingly. In addition, there is likely no one-size-fits-all approach to developing NGS-based assays, even within the same laboratory.

For instance, Geoff Otto, a senior director at Foundation Medicine, said that designing the firm's circulating tumor DNA assay involved considerations separate from those for its tumor-based assay. It wasn't just a matter of tweaking the existing, validated FoundationOne test for solid tumors to analyze cell-free DNA, he said.

Similarly, David Hillyard, director of molecular infectious disease testing at ARUP Laboratories, said that a key component of infectious disease assays, one that is often overlooked in other types of NGS-based tests, is DNA extraction.

Overall, the test developers said that off-the-shelf kits typically did not suffice for clinical assays without significant modifications.

William Biggs, head of sequencing operations at Human Longevity, said that in designing its whole-genome sequencing assay, the firm assessed Illumina's off-the-shelf TruSeq Nano kit for library prep, and while it worked well and yielded good results, it "wasn't automation friendly."

The firm sequences around 640 genomes per week, he said, so automation is key. Instead, the firm worked with Kapa Biosystems to modify its Hyper Prep kit for library prep. Designing the custom library prep took about six months, he said, but the result is that library prep is now scalable and automated. Reagents are pre-aliquoted onto pre-PCR plates, and three plates can be cycled at once through a liquid handling robot.
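
Those figures imply a genome entering the pipeline roughly every quarter hour, consistent with the per-genome rate Biggs cited later. A back-of-the-envelope sketch of the arithmetic, with the 96-well plate format assumed for illustration since the article does not specify it:

```python
# Back-of-the-envelope throughput math for a 640-genome-per-week pipeline.
GENOMES_PER_WEEK = 640
MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080 minutes

minutes_per_genome = MINUTES_PER_WEEK / GENOMES_PER_WEEK
print(f"one genome every {minutes_per_genome:.1f} minutes")  # ~15.8 minutes

# Plate math assumes standard 96-well plates, which the article does not
# specify; illustrative only.
WELLS_PER_PLATE = 96
plates_per_week = GENOMES_PER_WEEK / WELLS_PER_PLATE
print(f"about {plates_per_week:.1f} library plates per week")  # ~6.7 plates
```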

Aside from library construction, the sequencer itself is important to consider, Biggs said. While differences between sequencing vendors' technologies have been well documented, Biggs noted that even within the same family of instruments there are differences. Human Longevity runs 24 of Illumina's HiSeq X instruments, which are different from the rest of Illumina's HiSeq family of instruments in that they use patterned flow cells, Biggs said. "As a result of this, these systems have a very discrete sensitivity profile," he said. A critical factor for the HiSeq X instruments is loading the correct amount of sequencing library onto the flow cell, he said.
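
Biggs did not detail the loading protocol itself, but the conversion behind such a step is standard: a library's mass concentration and mean fragment size determine its molarity, which in turn governs cluster density on the flow cell. A minimal sketch with illustrative values (actual target loading concentrations are instrument-specific and not given here):

```python
# Convert a sequencing library's mass concentration to molarity, the
# quantity that determines how much library is loaded onto a flow cell.
# 660 g/mol per base pair is the standard average mass of dsDNA.
def library_molarity_nm(conc_ng_per_ul: float, mean_fragment_bp: float) -> float:
    """Return library concentration in nanomolar."""
    return conc_ng_per_ul / (660.0 * mean_fragment_bp) * 1e6

# Illustrative values only; not Human Longevity's actual numbers.
print(f"{library_molarity_nm(2.5, 450):.2f} nM")  # ~8.42 nM
```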

Once the proper library prep methods were locked down and the components that would impact performance identified, Biggs said, Human Longevity ran more than 400 iterations of the National Institute of Standards and Technology reference sample developed by the Genome in a Bottle Consortium to get a handle on its false positive and false negative rates. "Repeated analysis of this control sample gives us a very granular view of our accuracy," he said.
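
The bookkeeping behind such an exercise is straightforward: each call set from a Genome in a Bottle run is compared against the truth set, and calls are tallied as true positives, false positives, or false negatives. A minimal sketch of that comparison (production pipelines use dedicated tools such as hap.py and must also normalize variant representation and restrict to high-confidence regions, which this ignores):

```python
# Tally false positives and false negatives for a variant call set
# against a truth set. Variants are keyed as (chrom, pos, ref, alt);
# a real comparison must also normalize variant representation and
# restrict to the truth set's high-confidence regions.
def compare_to_truth(calls: set, truth: set) -> dict:
    tp = len(calls & truth)
    fp = len(calls - truth)
    fn = len(truth - calls)
    return {
        "tp": tp, "fp": fp, "fn": fn,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

truth = {("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T"), ("chr2", 500, "G", "A")}
calls = {("chr1", 1000, "A", "G"), ("chr2", 500, "G", "A"), ("chr3", 42, "T", "C")}
print(compare_to_truth(calls, truth))  # 2 TP, 1 FP, 1 FN
```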

Since Human Longevity began sequencing whole genomes about two years ago, the firm has sequenced more than 23,000 samples, and at its current pace it completes about one genome every 16 minutes, Biggs said. The firm targets an average of 30x coverage per sample and typically gets more than 95 percent of bases covered at 10x. If libraries fail to meet the 30x average coverage goal, the firm will resequence them — and its resequencing rate is less than 1 percent, he said. In addition, its duplication rate is just 4 percent, and 95 percent of its paired-end reads are mappable.
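
Those thresholds, a 30x mean and at least 95 percent of bases at 10x, are typically checked from per-base depths. A minimal sketch of the depth-based QC, assuming per-base depths are already in hand (for example, parsed from samtools depth output):

```python
# Coverage QC mirroring the thresholds described above: a 30x mean and
# at least 95 percent of bases covered at 10x. Assumes per-base depths
# are already available (e.g., parsed from samtools depth output).
def passes_coverage_qc(depths, mean_target=30.0,
                       breadth_depth=10, breadth_target=0.95):
    mean_depth = sum(depths) / len(depths)
    breadth = sum(d >= breadth_depth for d in depths) / len(depths)
    return mean_depth >= mean_target and breadth >= breadth_target

# Toy genome of 1,000 bases: depth 35 everywhere except a 3 percent
# low-coverage tail. Mean ~34.1x, 97 percent of bases at 10x -> passes.
depths = [35] * 970 + [5] * 30
print(passes_coverage_qc(depths))  # True
```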

Foundation's Otto, meanwhile, described very different challenges specific to designing an NGS assay that the firm plans to launch this quarter to assess cell-free tumor DNA.

For instance, he said, in running its tissue-based FoundationOne assay, the company has become very good at analyzing DNA from formalin-fixed, paraffin-embedded samples, and has even come to like FFPE. "It's stable and amenable to review and quality control by the pathologist," he said. But when the firm decided to develop an assay based on circulating tumor DNA, it took on very different challenges.

For instance, he said, one lesson has been that the manner in which a blood sample is collected is critical. He described one example in which researchers analyzed a tissue sample and a matched cell-free sample from a patient known to have a KRAS mutation. Initially, the KRAS mutation was not detected in the cell-free sample, despite being present in the tissue sample. But, he said, the cell-free sample was a "suboptimal collection": it had been stored for a prolonged period in an EDTA tube, and plasma removal was incomplete, so there was a large amount of genomic DNA in the sample and little to no ctDNA. In a sample collected from the same individual in a Streck tube under optimal conditions, with little genomic DNA and plenty of ctDNA, the researchers did observe the KRAS mutation. "Sample prep QC is critical," Otto said.
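
One common laboratory check for that failure mode, though Otto did not describe Foundation's specific QC metrics, is to inspect the extraction's fragment-size distribution: cfDNA is enriched around the roughly 167 bp mononucleosomal fragment, while DNA from lysed blood cells runs much longer. A hedged sketch of such a check, with illustrative thresholds:

```python
# Flag suspected genomic DNA contamination in a cell-free DNA extraction
# from its fragment-length distribution. cfDNA is enriched near ~167 bp;
# abundant long fragments suggest lysed-cell gDNA. The cutoff and the
# allowed fraction are illustrative, not Foundation Medicine's values.
def flag_gdna_contamination(fragment_lengths,
                            long_cutoff_bp=700,
                            max_long_fraction=0.10):
    """Return True if the sample looks contaminated with genomic DNA."""
    long_frac = sum(n > long_cutoff_bp for n in fragment_lengths) / len(fragment_lengths)
    return long_frac > max_long_fraction

good = [167] * 95 + [900] * 5    # mostly mononucleosomal fragments
bad = [167] * 50 + [5000] * 50   # heavy genomic DNA contamination
print(flag_gdna_contamination(good), flag_gdna_contamination(bad))  # False True
```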

Other types of oncology-based tests have considerations different from those for cell-free DNA. Mickey Williams, director of the molecular characterization laboratory at the Frederick National Laboratory for Cancer Research, has helped design two NGS-based assays to be used in National Cancer Institute-sponsored clinical trials — the 143-gene test being used in the NCI-MATCH basket trial and the assay for the NCI-MPACT trial — both of which run on Thermo Fisher's Ion Torrent PGM.

Williams said that in designing and validating the assays, it was important to understand the characteristics of the sequencing system itself and to be able to demonstrate little inter-laboratory variability, since multiple labs would be running the test to stratify patients into clinical trials.

At minimum, the assay reports over 4,000 known variants, Williams said. Validating each one would have been a cost-prohibitive task, so the researchers had to figure out a way to validate the entire test by demonstrating analytical performance on a representative subset of variants.
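
A representative subset is typically chosen by stratifying the reportable variants by the properties expected to drive analytical performance, such as variant class and sequence context, and sampling from each stratum. Williams did not describe the actual selection scheme; the sketch below illustrates the general stratified-sampling approach with made-up strata and counts.

```python
import random
from collections import defaultdict

# Choose a representative validation subset from a large variant panel
# by stratifying on variant class, so that every class is exercised.
# The strata and sample sizes below are hypothetical.
def representative_subset(variants, per_stratum, seed=0):
    random.seed(seed)
    strata = defaultdict(list)
    for v in variants:
        strata[v["class"]].append(v)  # e.g., SNV, insertion, deletion
    subset = []
    for members in strata.values():
        subset.extend(random.sample(members, min(per_stratum, len(members))))
    return subset

panel = ([{"id": i, "class": "SNV"} for i in range(3000)] +
         [{"id": i, "class": "indel"} for i in range(3000, 4000)])
print(len(representative_subset(panel, per_stratum=25)))  # 50 variants
```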

Because it was an NCI-sponsored clinical trial, the researchers worked closely with the US Food and Drug Administration in the design of the test. 

ARUP's Hillyard said that his group is in the midst of multiple clinical validation studies for a sequencing-based assay to detect infectious disease agents, comparing the metagenomic approach to standard tests at ARUP. In an initial study, published in the Journal of Clinical Microbiology in January, the ARUP and University of Utah team compared an RNA metagenomic sequencing approach using a web-based data analysis tool called Taxonomer with an FDA-cleared respiratory virus panel developed by GenMark.
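
Taxonomer, like a number of metagenomic classifiers, assigns reads to taxa by matching k-mers against a reference database. The sketch below illustrates the general k-mer binning idea only; it is not Taxonomer's actual algorithm or index structure, and the reference sequences are toy strings.

```python
from collections import Counter

# A minimal k-mer read classifier in the spirit of metagenomic binning
# tools. Generic illustration only: not Taxonomer's actual algorithm,
# index structure, or reference database.
def kmers(seq, k):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(references, k):
    """Map each reference k-mer to the taxon it came from."""
    index = {}
    for taxon, seq in references.items():
        for km in kmers(seq, k):
            index.setdefault(km, taxon)
    return index

def classify(read, index, k):
    """Assign a read to the taxon with the most matching k-mers."""
    hits = Counter(index[km] for km in kmers(read, k) if km in index)
    return hits.most_common(1)[0][0] if hits else "unclassified"

# Toy references and a short k; real classifiers use k around 21-31 and
# databases spanning full genomes.
refs = {"influenza_A": "ATGGAGAAAATAGTGCTTCTT", "rhinovirus": "TTAAAACTGGGTGTGGGTTGT"}
index = build_index(refs, k=8)
print(classify("ATGGAGAAAATAGTG", index, k=8))  # influenza_A
```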

The NGS test "had very good pick-up of the panel components," Hillyard said, and "follow-on studies have shown that it's very good." Nevertheless, he said, validating the test has been tricky. In particular, nucleic acid extraction has proven challenging, with different techniques working for different sample types and different infectious disease agents. For instance, although bead-based extraction methodologies are considered the gold standard, "not all beads are the same," he said.

In addition, there is a lot of variation across organism types: influenza virus is relatively easy to extract, but gram-positive bacteria are much more difficult. There is a need in the field for "robotic high-throughput pan-organism extraction," he said.