Informatics Applications for Pharmacogenomics, Text Mining Among Trends Highlighted at TBI 2013

This year's conference on Translational Bioinformatics featured posters and presentations focused on applying informatics tools and technologies to areas such as pharmacogenomics; text mining; understanding the interactions of drugs and genes; and the ethical, legal, and privacy concerns associated with using genomics in both research and clinical care.

Among the presentations at this year's TBI was a poster that described a method called the clinical decision support optimization platform, developed by researchers at Harvard Medical School and the University of Milwaukee, which is used to optimize treatment with the blood-thinner warfarin.

According to the poster abstract, the platform simulates patients' response to a particular treatment protocol and predicts outcomes based on clinical and genetic characteristics. Next, it uses an optimization technique to identify one protocol — out of six tested — that minimizes the risk of bleeding and stroke. The method then uses cluster analysis to bring together patients with similar characteristics and responses to find general treatment rules.
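
The overall workflow, simulating outcomes, choosing the lowest-risk protocol, and then clustering similar patients, can be sketched roughly as follows; the feature names, risk model, and numbers below are illustrative assumptions, not the platform's actual code.

```python
# Illustrative sketch only: hypothetical features and a toy risk model,
# not the Harvard/Milwaukee platform's implementation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated cohort: age, weight (kg), and a binary genetic marker
# (a CYP2C9/VKORC1-like variant flag) for 500 patients.
patients = np.column_stack([
    rng.normal(65, 10, 500),      # age
    rng.normal(80, 15, 500),      # weight
    rng.integers(0, 2, 500),      # variant carrier (0/1)
])

def simulated_risk(patient, protocol_id):
    """Toy stand-in for a patient-level outcome simulation:
    returns a combined bleeding/stroke risk score for one protocol."""
    age, weight, variant = patient
    base = 0.02 + 0.001 * (age - 60) + 0.005 * variant
    # Each protocol trades off dose intensity differently (arbitrary numbers).
    dose_effect = [0.010, 0.007, 0.012, 0.006, 0.009, 0.008][protocol_id]
    return base + dose_effect * (1 - 0.3 * variant)

# Step 1: for each patient, pick the protocol (of six) minimizing predicted risk.
best_protocol = np.array([
    min(range(6), key=lambda p: simulated_risk(row, p)) for row in patients
])

# Step 2: cluster patients on characteristics plus chosen protocol to look for
# general treatment rules shared by similar patients.
features = np.column_stack([patients, best_protocol])
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(clusters))
```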

In a separate presentation, Harvard Medical School researchers described a pharmacogenomics-based approach for analyzing drug-gene pairs and generating reports that are then attached to electronic medical records. According to one of the developers, Vincent Fusaro, a postdoctoral researcher at Boston Children's Hospital, the researchers have used it to standardize thiopurine S-methyltransferase (TPMT) testing at the hospital. TPMT is an enzyme encoded by the TPMT gene that metabolizes thiopurine-based drugs. Fusaro said that the researchers plan to apply the approach to other drug-gene pairs.
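
As a rough illustration of the drug-gene reporting idea, the sketch below maps a TPMT diplotype to a predicted metabolizer status and a dosing note; the allele table, report fields, and wording are hypothetical and are not the hospital's actual logic.

```python
# Hedged illustration: a minimal genotype-to-report mapping with made-up
# report fields; not the Boston Children's/Harvard pipeline.
TPMT_PHENOTYPE = {
    # Diplotype -> predicted metabolizer status (simplified; *1 = normal allele)
    ("*1", "*1"): "normal metabolizer",
    ("*1", "*3A"): "intermediate metabolizer",
    ("*3A", "*3A"): "poor metabolizer",
}

def tpmt_report(patient_id, allele1, allele2):
    """Build a small structured report that could be attached to a record."""
    phenotype = TPMT_PHENOTYPE.get(tuple(sorted((allele1, allele2))),
                                   "indeterminate")
    return {
        "patient_id": patient_id,
        "gene": "TPMT",
        "diplotype": f"{allele1}/{allele2}",
        "phenotype": phenotype,
        "note": "Consider thiopurine dose adjustment per pharmacogenomic guidance"
                if phenotype != "normal metabolizer" else "Standard dosing",
    }

print(tpmt_report("PT-001", "*1", "*3A"))
```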

Another presentation from the conference described a meta-prediction method that uses random forests to combine the outputs of four SNV function prediction methods — PANTHER, PhD-SNP, SNAP, and SIFT — in order to better predict disease-associated SNVs. It was developed by researchers at the University of Alabama, Stanford University, and Rutgers University.
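
The general meta-prediction recipe, using the component tools' scores as features for a random forest, might look like the following sketch; the data here are synthetic and the code is not the authors' implementation.

```python
# Sketch of the meta-prediction idea: treat scores from PANTHER, PhD-SNP,
# SNAP, and SIFT as features and train a random forest on known labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
# Columns: hypothetical scores from the four component predictors.
X = rng.random((n, 4))
# Synthetic labels loosely correlated with the averaged component scores.
y = (X.mean(axis=1) + rng.normal(0, 0.15, n) > 0.5).astype(int)

meta = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(meta, X, y, cv=5, scoring="roc_auc").mean())
```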

Another talk highlighted the Investigation/Study/Assay, or ISA, suite of software tools developed by a team from the European Bioinformatics Institute, Oxford University, Harvard University, and other institutions. ISA is a set of metadata tracking tools that facilitate standards-compliant collection, curation, management, and reuse of datasets (BI 9/3/2012 and BI 2/3/2012).
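
Schematically, the hierarchy that ISA tracks can be pictured as nested investigation, study, and assay records, as in the toy model below; this is an illustration of the hierarchy only, not the ISA-Tab file format or the isatools library.

```python
# Schematic only: the Investigation -> Study -> Assay hierarchy modeled with
# plain dataclasses; field names here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Assay:
    measurement_type: str          # e.g. "transcription profiling"
    technology: str                # e.g. "RNA-Seq"
    data_files: List[str] = field(default_factory=list)

@dataclass
class Study:
    identifier: str
    description: str
    assays: List[Assay] = field(default_factory=list)

@dataclass
class Investigation:
    identifier: str
    title: str
    studies: List[Study] = field(default_factory=list)

inv = Investigation("INV-1", "Example stem cell investigation", [
    Study("STU-1", "Differentiation time course",
          [Assay("transcription profiling", "RNA-Seq", ["sample1.fastq.gz"])]),
])
print(inv.studies[0].assays[0].technology)
```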

According to the presentation, given by Shannan Ho Sui, a research scientist at the Harvard School of Public Health, researchers involved in the Stem Cell Discovery Engine and the Harvard Stem Cell Institute, who were already using ISA's tools for their respective stem cell resources, were able to combine the two repositories into a single open source resource called the Stem Cell Commons, which brings together datasets, online tools, and code with experiments and results. It currently includes over 3,000 bioassays from 182 stem cell experiments. The researchers are also developing a tool called Refinery that will enable users to transfer data from the Stem Cell Commons into a Galaxy pipeline for analysis and then retrieve and visualize the results.
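
The handoff of a dataset to a Galaxy server can be illustrated with Galaxy's API via the BioBlend client, as in the hedged sketch below; this is not Refinery's implementation, and the server URL, API key, and file name are placeholders.

```python
# Hedged sketch of pushing data to a Galaxy server through its API using the
# BioBlend client; URL, key, and file path are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

# Create a history to hold the transferred dataset.
history = gi.histories.create_history(name="stem-cell-commons-transfer")

# Upload a local file (stand-in for a dataset pulled from the repository).
gi.tools.upload_file("experiment_counts.txt", history["id"])

# List what now sits in the history; downstream tools or workflows could then
# be run against these datasets and the outputs retrieved for visualization.
for item in gi.histories.show_history(history["id"], contents=True):
    print(item["name"], item["id"])
```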

During one of the keynote sessions, Stephen Friend, president, co-founder, and director of Sage Bionetworks, discussed the organization's efforts to develop and use Synapse, an infrastructure that combines scientific data from resources such as the Cancer Genome Atlas and the Gene Expression Omnibus, software, and disease models into a shared space. The platform consists of a web portal and web services; integrates with data-analysis tools; and is organized around analysis communities that any scientist can create or join.

Its features include detailed descriptions of available datasets and a mechanism to access data that uses common formats, controlled vocabularies, and annotation standards. It also includes workflow tracking tools as well as IT infrastructure from Amazon Web Services.
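
For a sense of how researchers interact with such a platform, the sketch below uses Synapse's Python client (synapseclient) to fetch a shared dataset and store a result; the entity IDs and token shown are placeholders, not real resources.

```python
# Hedged sketch of basic Synapse access with the synapseclient package;
# credentials and Synapse IDs are placeholders.
import synapseclient

syn = synapseclient.Synapse()
syn.login(authToken="YOUR_PERSONAL_ACCESS_TOKEN")

# Fetch a shared dataset by its Synapse ID and note where it was downloaded.
entity = syn.get("syn00000000")
print(entity.name, entity.path)

# Contribute a local analysis result back to a shared project.
result = synapseclient.File("model_predictions.csv", parent="syn11111111")
syn.store(result)
```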

The system, which is currently available in beta, will be used in a partnership between Sage Bionetworks and the organizers of the Dialogue for Reverse Engineering Assessment and Methods, or DREAM, to run open computational challenges focused on scientific discovery and clinical research (BI 2/22/2013). It was also used in a breast cancer challenge during last year's DREAM contest that focused on developing algorithms to predict breast cancer survival (BI 6/15/2012).

Meanwhile, Paea LePendu, a research scientist at Stanford's Center for Biomedical Informatics, presented a method that mines the text of clinical notes to identify adverse drug-drug interactions that arise when two or more drugs are used together. The method combines ontologies from the National Center for Biomedical Ontology with odds ratios.
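
The odds ratio piece of such an approach can be illustrated with a simple disproportionality calculation over note counts, as in the toy example below; the counts and the continuity correction are illustrative choices, not the Stanford pipeline.

```python
# Toy illustration: given counts of notes mentioning a drug pair and an
# adverse event term, compute a 2x2 odds ratio with a confidence interval.
import math

def odds_ratio(a, b, c, d):
    """a: pair + event, b: pair, no event, c: no pair, event, d: neither.
    A 0.5 continuity correction avoids division by zero."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return (a * d) / (b * c)

# Hypothetical note counts for a candidate interacting drug pair.
a, b, c, d = 40, 960, 200, 48800
or_value = odds_ratio(a, b, c, d)
ci_width = 1.96 * math.sqrt(sum(1.0 / (x + 0.5) for x in (a, b, c, d)))
low = math.exp(math.log(or_value) - ci_width)
high = math.exp(math.log(or_value) + ci_width)
print(f"OR = {or_value:.2f} (95% CI {low:.2f}-{high:.2f})")
```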

Another presentation from researchers at Vanderbilt University described a method of identifying phenotypes from electronic medical record data that uses an unsupervised feature learning technique.
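
One common way to realize unsupervised feature learning on EMR data is to factorize a patient-by-code matrix so that each latent factor groups co-occurring codes into a candidate phenotype; the sketch below does this with non-negative matrix factorization on synthetic data and is not the Vanderbilt method.

```python
# Minimal sketch of unsupervised feature learning on a synthetic
# patient-by-code matrix using non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Synthetic counts of 50 billing/lab codes across 300 patients.
X = rng.poisson(lam=0.3, size=(300, 50)).astype(float)

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
patient_factors = model.fit_transform(X)   # per-patient loading on each factor
code_factors = model.components_           # which codes define each factor

# The top-weighted codes per factor are candidate phenotype definitions.
for k, row in enumerate(code_factors):
    print(f"factor {k}: codes {np.argsort(row)[::-1][:5]}")
```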

Finally, researchers from Columbia University discussed how they applied an algorithmic approach developed by the Electronic Medical Records and Genomics, or eMERGE, network to analyze phenotype data associated with drug-induced liver injury. Through its website, the eMERGE consortium offers access to validated algorithms that identify patients with specific disease phenotypes based on data in their electronic medical records (BI 3/23/2012).
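
In spirit, such phenotyping algorithms combine diagnosis codes, laboratory values, and medication exposures into explicit rules; the sketch below shows a hypothetical drug-induced liver injury rule with made-up codes and thresholds, not an actual eMERGE algorithm.

```python
# Hedged sketch of a rule-based EMR phenotyping algorithm; the codes, lab
# threshold, and drug list are illustrative assumptions only.

def is_case(patient):
    """patient: dict with 'icd9' (set of codes), 'alt_max' (peak ALT, U/L),
    and 'drugs' (set of medication names)."""
    has_liver_dx = bool(patient["icd9"] & {"573.3", "570"})   # liver injury-style codes
    high_alt = patient["alt_max"] is not None and patient["alt_max"] > 200
    on_suspect_drug = bool(patient["drugs"] & {"isoniazid", "amoxicillin-clavulanate"})
    return has_liver_dx and high_alt and on_suspect_drug

example = {"icd9": {"573.3"}, "alt_max": 450, "drugs": {"isoniazid"}}
print(is_case(example))  # True for this synthetic record
```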

Other presentations focused on efforts to fight cancer. These included a network-based technique from the University of California, San Diego, for stratifying somatic mutations that uses network smoothing and clustering approaches to identify cancer subtypes; a method that integrates multiple layers of genomic data to improve clinical outcome predictions in cancer from a team at Seoul National University; and the Georgetown Database of Cancer, or G-DOC, a repository of cancer information and tools developed at Georgetown University's Lombardi Comprehensive Cancer Center (BI 10/29/2010 and BI 10/26/2012).
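
The network-smoothing step behind the UC San Diego stratification technique can be sketched as propagating binary mutation profiles over a gene network and then clustering the smoothed profiles, as below; the network, profiles, and parameters are synthetic, and this is not the UCSD implementation.

```python
# Rough sketch of network smoothing plus clustering on synthetic data:
# propagate binary mutation profiles over a gene-gene adjacency matrix with a
# random walk with restart, then cluster the smoothed profiles into subtypes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_genes, n_patients = 100, 60

# Synthetic symmetric gene network and sparse binary mutation profiles.
A = (rng.random((n_genes, n_genes)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T
W = A / np.maximum(A.sum(axis=0, keepdims=True), 1)   # column-normalized
F0 = (rng.random((n_patients, n_genes)) < 0.03).astype(float)

alpha, F = 0.5, F0.copy()
for _ in range(30):                         # random walk with restart
    F = alpha * F @ W.T + (1 - alpha) * F0

subtypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(F)
print(np.bincount(subtypes))
```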

The conference ended with a review, given by Russ Altman, a professor of bioengineering, genetics, and medicine at Stanford, of select papers from the past year that discussed applications of bioinformatics in translational medicine.

His round-up, which covered informatics applications that link biological and clinical entities, included studies that looked at the capacity of genomic data to identify disease risk, a Cell paper that described an effort to develop an in silico whole-cell model of a living cell (BI 8/3/2012), and a method for re-identifying study subjects from their genomic data (BI 1/18/2013).
