Comparison of Commercial Software Reveals Gaps in Variant Interpretation, Reporting

NEW YORK (GenomeWeb) – A recent study comparing the results of four commercial pipelines used to analyze exome sequence data from a family of four underscores the need for more standards in variant interpretation and reporting.

The study, which was published in BMC Genomics last month, described efforts to analyze exome data from Manuel Corpas, a data visualization project leader at The Genome Analysis Centre, and his family. The software used for the study was provided by GeneTalk, Diploid, and two companies that are now owned by Qiagen: Ingenuity Systems and BioBase.

The companies tapped to do the analysis were chosen because they currently market genome interpretation software and services, Corpas told GenomeWeb. Each company was provided with VCF files from Corpas and three family members and asked to identify variants that would be considered significant from a clinical point of view. "All of them provided the reports that they would have provided their clients," he said.

Representatives from three of the companies that participated in the study — Ingenuity, BioBase, and Diploid — told GenomeWeb that their respective products found many of the same variants as their counterparts in the study. However, their pipelines differed on the central question of which variants are clinically significant. According to the paper, each software package flagged a different set of mutations as clinically significant, with no overlap among the selected mutations.

The absence of any overlap in their reports surprised Corpas, he told GenomeWeb. "We interpret this as a sign of the fact that the field still needs to mature in terms of current state of the art," he said. "This lack of concordance is not just the product of the different methodologies, it's the product of the different data sources, it's the product of the fact that clearly we don't have any standards right now on how this kind of analysis should be done."

To some extent, the results are not entirely unexpected because these were "healthy genomes," so the companies weren't looking for variants associated with a specific clinical phenotype.

"I'm not surprised, because in my experience, if you are a generally healthy person, it's very hard to find anything of significance in the genome," Madeline Price Ball, director of research for Harvard's Personal Genome Project, told GenomeWeb. "What's being attempted here is to kind of extract a signal from something that may not have much signal in it." That is not something that is routinely required of existing tests, for example, "we don't demand that we find polyps on every colonoscopy," she said.

Russ Altman, a Stanford University professor of bioengineering, genetics, and medicine, pointed out that the experiment lacked a clear focus, which left much to interpretation. "My initial reaction was 'that's a cool idea and somebody should do it,' but also 'how did they compare? What was the methodology used?'" he said. Based on the information provided in the paper about each company’s approach, it appears that they are trying to do similar things but the details do matter, he said.

Companies have to make decisions about which variants to prioritize and may be working with information from different databases. "If you have a specific question, then I think you can expect a specific answer. If you don't have a question, then you shouldn't be surprised if you get a zillion different answers," he said.

Having a clear research question is crucial in part because it determines how software parameters are set and which filters are used. "Some of these tools are made for exploration; they are made to set you down on the road to discover what caused a disease. So the [company] might start their filters in one way," Ball said. "And then another one might say 'we just want to start with an alert to let you know if there is something [in the data] really well established.' How are those set?"

It's also important to have input from the clinical community. Heidi Rehm, chief laboratory director for molecular medicine at Partners Healthcare Center for Personalized Genetic Medicine, pointed out that the study was not performed as part of a CLIA-certified effort, nor did it appear to have input from trained clinicians, factors that would have made a difference in the way variants were interpreted and reported.

For example, the way a clinical lab would use the Human Gene Mutation Database (HGMD), one of the resources that was used in this study, is as a source of citations that would need to be independently evaluated and verified by other sources, she said. That's because there are cases where there is not enough evidence in the database to justify classifying a given variant as a deleterious mutation and there are times when the data is just plain wrong. "We don't trust the DM [classification]. We just go and get the papers, read them, and we classify according to our clinical rules."

There are also issues with the way information is treated once it's been found, Rehm added. "Each company is making its own determination as to … which traits and how strong the association study needs to be to consider it valid," she said. "Because no one is asking any specific questions of this family's data ... it's up to the whims of what information each of these databases has curated."

The paper also does not explain what the standards were for determining the "most significant" variants, Ball said. Differing approaches to determining significance are one of the problems plaguing a database like ClinVar. "[There's] a lot of the stuff in those databases, [and] it's unclear how it got in there [and] what the standards were for the authors," she said. "Sometimes the people who put it in there claim they followed a certain standard, but you don't have a lot of insight into what they saw and they have their own private data ... that they interact with, and you have to trust them to apply the same standards correctly."

It's worth noting that none of the companies that participated in the study offer genome interpretation under a direct-to-consumer model. Belgian firm Diploid, which officially opened its doors last year, provides genome interpretation services to hospitals, commercial and academic clinical laboratories, and the research and development arms of pharmaceutical companies that are trying to identify the genetic basis of rare conditions.

Qiagen Redwood City (formerly Ingenuity) and BioBase sell their genome interpretation software rather than provide services. The Ingenuity Variant Analysis (IVA) software provides tools for identifying and interpreting causal variants from whole-genome, exome, and targeted panel data in different contexts. BioBase's GenomeTrax software provides tools for annotating variants.

These tools are intended for use in research and clinical contexts where the users are typically researching particular diseases and have access to additional information related to the patient. "[Typically], you have some type of hypothesis about what the particular issue is that allows you to use that context to narrow in on those causative variants, whereas when you are asking a much more general question, say, 'what is the probability that I will inherit this particular disease?' the question is different," Bryant Macey, senior vice president of products at Ingenuity Systems, said. "They are both answers you may find in the genetic material, but the way you get to them is very different."

Peter Schols, Diploid's CEO, expressed similar sentiments. From his company's perspective, "The [project had] all necessary professionals that you want to include in genome interpretation [and so] we were willing to cooperate on this," he told GenomeWeb. "But it's not something that we are considering doing for a consumer audience."

Macey also pointed out that at the time of the initial analysis, Ingenuity and BioBase were both operating as separate companies with different business objectives and goals. Since Qiagen now owns both firms, there's a much tighter integration between their products that would help resolve the differences in their reports.  

Furthermore, the analysis was done more than two years ago, said Frank Schacherer, vice president of discovery genomics at Qiagen, and is based on evidence that was available to the researchers at the time. More recent studies involving some of the variants included in its report show that these are not pathogenic, contrary to what was previously thought. For example, some of the mutations that were reported by GenomeTrax turned out to come from highly variable regions of the genome or to have very high allele frequencies, and so were probably not disease causing, he said.

Selecting significant variants

The BMC Genomics paper describes how the companies did their analysis in some detail, but GenomeWeb asked representatives from Diploid and Qiagen to shed some light on their particular processes for selecting significant variants. GeneTalk did not respond to GenomeWeb's request for an interview.

For Diploid, it came down to the criteria used to filter the variants, Cyrielle Kint, Diploid's chief scientific officer, told GenomeWeb. The company focused its search on inherited variants, variants associated with pharmacogenomic responses, and variants not categorized as clinically relevant but for which there is some evidence of a genotype-phenotype risk in the literature.

The variants included in its final report were those that had clear evidence of a genotype-phenotype relationship and that had been classified as pathogenic or likely pathogenic based on the American College of Medical Genetics and Genomics' guidelines, Kint said. It did not report any variants for which there was conflicting information about pathogenicity, nor did it report variants said to confer some kind of health risk but lacking a clear genotype-phenotype correlation.
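As a rough illustration of this kind of report-filtering step (it is not Diploid's actual pipeline), a minimal Python sketch might keep only variants whose annotated clinical significance is pathogenic or likely pathogenic and drop those with conflicting or uncertain assertions. The ClinVar-style CLNSIG annotation field, the category names, and the file name are assumptions made for the example.

```python
# Minimal sketch of an ACMG-style report filter over an annotated VCF.
# Assumes variants carry a ClinVar-style CLNSIG tag in the INFO column;
# field names and categories are illustrative only.

REPORTABLE = {"Pathogenic", "Likely_pathogenic"}
EXCLUDED = {"Conflicting_interpretations_of_pathogenicity", "Uncertain_significance"}

def parse_info(info_field):
    """Split a VCF INFO string (key=value;key=value;flag) into a dict."""
    info = {}
    for entry in info_field.split(";"):
        key, _, value = entry.partition("=")
        info[key] = value
    return info

def reportable_variants(vcf_path):
    """Yield variants whose annotated clinical significance is reportable."""
    with open(vcf_path) as handle:
        for line in handle:
            if line.startswith("#"):            # skip meta and header lines
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _, ref, alt = fields[:5]
            clnsig = parse_info(fields[7]).get("CLNSIG", "")
            # Drop variants with conflicting or uncertain assertions outright.
            if any(term in clnsig for term in EXCLUDED):
                continue
            if any(term in clnsig for term in REPORTABLE):
                yield chrom, pos, ref, alt, clnsig

if __name__ == "__main__":
    for variant in reportable_variants("proband.annotated.vcf"):
        print(*variant, sep="\t")
```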

In Qiagen's case, Ingenuity actually did not perform the IVA analysis that was reported in the paper — it was done by a researcher at the NIH, Schacherer told GenomeWeb. However, "I think it was a fair approach that she took," he said.

He also said that at the time the initial analysis was done, the ACMG guidelines were not yet in place, and so there was no standard way at the time to define clinical significance. That analysis also depended on older databases that have since been updated with information from newer published studies.

The current version of the IVA software reports only variants that are classified as pathogenic or likely pathogenic according to the ACMG guidelines. When Qiagen reran the analysis on the Corpas dataset, this version of the software found all nine variants that were reported in the previous analysis; however, based on the ACMG guidelines, only two of the original nine are deemed pathogenic.

BioBase did, however, perform the initial GenomeTrax analysis itself. Schacherer said he used the software to search the Corpas family exomes for variants that were curated in HGMD at the time. However, he noted that this sort of experiment is not really what GenomeTrax was designed to do. "[It's] more of a tool that people build into their pipelines to annotate variants. It's not intended as an end-user tool where you actually interactively filter and work with the data."
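A minimal sketch of that general pattern, purely illustrative and not GenomeTrax itself, might join VCF records against a locally stored HGMD-style table keyed on chromosome, position, reference, and alternate alleles. The file names and column headers below are assumptions for the sketch.

```python
import csv

def load_mutation_table(path):
    """Load a tab-delimited table with chrom, pos, ref, alt, gene, class columns."""
    table = {}
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            key = (row["chrom"], row["pos"], row["ref"], row["alt"])
            table[key] = (row["gene"], row["class"])   # e.g. "DM" for disease-causing mutation
    return table

def annotate_vcf(vcf_path, table):
    """Yield (variant, annotation) pairs for VCF records present in the table."""
    with open(vcf_path) as handle:
        for line in handle:
            if line.startswith("#"):            # skip meta and header lines
                continue
            chrom, pos, _, ref, alt = line.rstrip("\n").split("\t")[:5]
            hit = table.get((chrom, pos, ref, alt))
            if hit is not None:
                yield (chrom, pos, ref, alt), hit

if __name__ == "__main__":
    known = load_mutation_table("hgmd_style_table.tsv")   # assumed local, licensed resource
    for variant, (gene, variant_class) in annotate_vcf("family_member.vcf", known):
        print(*variant, gene, variant_class, sep="\t")
```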

For Corpas, this project reveals existing gaps in the genome interpretation pipeline. The bottom line, he said, is that "clearly we don't have any standards right now on how this kind of analysis should be done."

Atul Butte, a professor of pediatrics and director of the Institute for Computational Health Sciences at the University of California, San Francisco, pointed to an article that he and colleagues published in 2013 in the Journal of Human Genetics, in which they evaluated risk predictions made for 22 diseases by three direct-to-consumer companies – 23andMe, Navigenics, and Decode Genetics – using data from three Japanese individuals. Their results showed that "the overall prediction results were correlated with each other, but not perfectly matched."

"It is getting more obvious that interpretation of the genome sequence is the harder part, not getting the sequence," Butte said in email. "There are no gold standards for disease prediction that one can use to train computer algorithms against."

This has implications for the use of genomic information, particularly in clinical contexts, Corpas added. "When genome technologies become mainstream in clinical settings, you are going to have to be extremely careful about the source of the reports that you use for those clinical interpretation exercises," he said.

If it were possible to run this experiment a second time, Corpas said he would want to try it with whole-genome data. He would also want to incorporate additional sources of information, such as data from microbiota and from wearable devices such as the Apple Watch.

"I think the biggest constraint is the fact that the data that exists out there to be able to facilitate the interpretation of genomes is not freely available, is not standardized, [and] it's difficult to use and integrate with other systems," he said. "This is a huge problem which is hampering the development of the latest technologies by people ... who have the interest and the skills to contribute."