7 Methods Rise to Top in IBM-Columbia’s ‘DREAM’ Reverse-Engineering Challenge

This article has been updated from a previous version to clarify the judging process for the challenges and to correct the spelling of Gustavo Stolovitzky's name.
 
While it was designed as more of a dialogue than a dog race, the first methods-assessment conference in computational systems biology this week identified a handful of algorithms that outperformed their peers in accurately reverse-engineering biological networks.
 
The event, the Dialogue for Reverse Engineering Assessments and Methods, or DREAM, held at the New York Academy of Sciences, determined that seven groups out of a field of 36 teams chosen by 50 distinguished scientists had developed the most accurate reverse-engineering methods across five categories.
 
The conference, sponsored by IBM and Columbia University, grew out of a desire to help the computational systems biology community evaluate the performance and accuracy of its algorithms, and was modeled after the successful CASP (Critical Assessment of protein Structure Prediction) meetings in the protein structure prediction community [BioInform 02-17-06].
 
“One of the things that is good about CASP is it really got people to measure their methods against each other or change methods,” John Jeremy Rice, a member of IBM’s T.J. Watson Research Center, told BioInform this week.
 
Likewise, he said, the DREAM event ensured that academic groups tested their prediction methods against others’ instead of simply claiming that theirs were superior.
 
Gustavo Stolovitzky, functional genomics and systems biology manager with IBM’s Computational Biology Center, told BioInform that “there are good methods and bad methods, and … it’s good to know which methods perform well so we can understand what would be the next step for all of us to build upon [to produce] the best performing methods.”
 
Five Categories
 
The DREAM evaluation comprised five different challenges: identifying “true” targets of the BCL6 gene as opposed to decoys in a gene expression data set; predicting which gene pairs belonged to a “gold standard” data set in a protein-protein interaction subnetwork; predicting a five-gene network from in vivo measurements of a modified model organism; predicting the properties of one or more in silico networks; and reconstructing a genome-scale transcriptional network.
 
The methods were evaluated automatically using "objective algorithms," Stolovitzky said. The evaluation runs took one day to process on a workstation.
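The article does not detail what these objective algorithms computed. As a hedged illustration only, a scoring step of this kind might compare each team's ranked predictions against a gold-standard set, for instance by measuring precision among the top k calls; the function and data below are hypothetical, not DREAM's actual scoring code.

```python
# Minimal sketch of an "objective" scoring step, assuming the evaluation
# compared ranked predictions against a known gold standard. The metric
# (precision among the top-k predictions) and all names are illustrative
# assumptions, not DREAM's actual evaluation pipeline.

def precision_at_k(ranked_predictions, gold_standard, k):
    """Fraction of the top-k predictions that appear in the gold standard."""
    top_k = ranked_predictions[:k]
    hits = sum(1 for p in top_k if p in gold_standard)
    return hits / k

# Hypothetical example: a team's ranked list of candidate BCL6 targets
# scored against the set of true (non-decoy) targets.
predictions = ["geneA", "geneB", "geneC", "geneD"]
true_targets = {"geneA", "geneC"}
print(precision_at_k(predictions, true_targets, k=4))  # 0.5
```

A metric like this can be computed identically for every submission, which is consistent with Stolovitzky's point that the judging was automatic rather than subjective.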
 


In the BCL6 transcriptional-target challenge, two teams were selected as the most accurate out of a field of 11 groups.
 
The scores of the two teams, from the National University of Singapore and the Genome Institute of Singapore, “were very close to each other,” Stolovitzky said.
 
The challenge consisted of identifying which targets were true and which were decoys, using an independent panel of gene-expression data.
 
Stolovitzky said that of the 11 teams that participated in the BCL6 challenge, six performed no better than a random predictor would have.
 
The best performer in the protein-protein interaction challenge was Team NetMiner of the Institute for Infocomm Research.
 
Stolovitzky noted that predicting protein-protein interactions “seems to be harder than predicting the targets of transcription factors.”
 
“Of the first 200 predictions in this challenge, the best performer only had 20 correct,” he said, a precision of just 10 percent among its top calls. In addition, of the five participating teams, “four of them didn’t do much better than chance.”
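For context on what “not much better than chance” means, the expected number of correct calls for a random predictor is simply the number of guesses scaled by the prevalence of true pairs in the candidate pool (the mean of the hypergeometric distribution). The pool sizes in the sketch below are hypothetical, since the article does not report the actual dimensions of the challenge.

```python
# Sketch of a chance baseline for top-k predictions, assuming a random
# predictor draws k candidate pairs uniformly from a pool containing a
# known number of true interactions. Pool sizes are hypothetical; the
# article does not report the actual sizes used in the DREAM challenge.

def expected_random_hits(pool_size, true_positives, k):
    """Expected correct calls when k predictions are drawn at random
    (mean of the hypergeometric distribution)."""
    return k * true_positives / pool_size

# E.g., 200 random picks from a pool of 10,000 candidate pairs, 500 of
# which are true interactions, would average 10 correct calls:
print(expected_random_hits(pool_size=10_000, true_positives=500, k=200))  # 10.0
```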
 
The best-performing teams for the remaining three challenges were: in category 3, Team AGE from the Laboratory of Intelligent Systems at the Ecole Polytechnique Federale de Lausanne in Switzerland, and Team RAGNO from the Center for Advanced Studies, Research and Development in Sardinia; in category 4, Team TIGEM (Telethon Institute of Genetics and Medicine) and Team GustafssonHornquistLundstrom; and in category 5, co-winners Team LBM from the National Institute of Diabetes and Digestive and Kidney Diseases at the National Institutes of Health and Team GISL from Columbia's electrical engineering department.
 
The names of challengers who did not perform well were kept anonymous, Stolovitzky said, “because we don’t want anyone to feel that by participating they are hurting their chances of continuing doing research in this field.”
 
At least one scientist attending the event criticized this policy, saying it would be better if all results and identities were disclosed. Stolovitzky agreed, and said he would consider disclosing all participants in upcoming DREAM challenges in order to better balance the experiments.
 
“Last year we explored what the gold standard for the challenge could be and how to score the challenges,” said Stolovitzky. “We are preparing for this event [to continue]. We hope the momentum is such that we can continue with a yearly event.”
 
Additional information about the event can be found here.
