DREAM 5 Entries Show Progress in Reverse-Engineering Biological Networks, with Room for Improvement

By Uduak Grace Thomas

Entries in this year's Dialogue on Reverse Engineering Assessment and Methods, or DREAM 5, underscored the progress that has been made in computational systems biology in recent years, though organizers of the challenge emphasized that there is still plenty of work ahead for the field.

Gustavo Stolovitzky, functional genomics and systems biology manager with IBM’s Computational Biology Center and a DREAM 5 organizer, told BioInform that although this year’s entries were good and participation was strong, there is still room for improvement for next year’s challenge.

Results of the best-performing groups in DREAM 5 — representing four separate challenges — were presented this week at the 3rd Annual Joint Conference on Systems Biology, Regulatory Genomics, and Reverse Engineering Challenges at Columbia University.

This year, 73 teams participated in one or more of the challenges — up from 53 teams in last year's DREAM 4 and 40 in the 2008 challenge.

Stolovitzky noted that while many DREAM 5 participants performed well in challenges that involved in silico networks, the results were “not so good” in tasks that involved in vivo networks, where gaps in experimental data mean that there are no “gold standard” networks to use for benchmarking.

Ahead of future DREAM challenges, Stolovitzky said that the community now “need[s] to create gold standards and … we need to do more experiments and … find the right collaborators for that.”

“The in silico networks are starting to reach their potential and [yield] diminishing returns … we know how to solve those,” he noted. “I think the real challenge is in the in vivo networks.”

Four challenges comprised this year’s event: the Epitope-Antibody Recognition Challenge, in which participants used peptide sequence data to predict the binding specificity of peptide-antibody interactions; the TF-DNA Motif Recognition Challenge, where participants used data from protein-binding microarrays to predict the specificity of a transcription factor binding to a 35-mer probe; the Systems Genetics Challenge, where participants predicted disease phenotypes and inferred gene networks from systems genetics data; and the Network Inference Challenge, where participants inferred gene regulatory networks from simulated and in vivo gene-expression microarray data (BI 05/28/2010).

During the conference, Robert Prill, a member of the systems biology group at IBM Research and a DREAM organizer, noted that teams that participated in the first challenge had very similar scores overall, while the final scores for participants in the remaining three challenges were more spread out.

One reason for this disparity in the results, according to Stolovitzky, was that the first challenge “was purely experimental” and the training set contained enough information to make good predictions. Although there was data available for the other three challenges, they were more difficult to solve for a variety of reasons.

“Sometimes it’s quality of data, what you perturb [and] what you don’t … [it’s] not just the size or number of samples … but we need to think a little bit more,” he said.

Winning Entries

A total of 15 teams participated in the Epitope-Antibody Recognition Challenge, with the winning prediction coming from a group at the University of Maryland that called itself Team Pythia, which used an ensemble of support vector machines to make its predictions. The University of Pavia's Team Pavia, which adopted a knowledge-based approach, came in second place.
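
To give a flavor of the general approach (the k-mer featurization, toy data, and parameters below are illustrative assumptions, not Team Pythia's actual pipeline), an ensemble of support vector machines over peptide sequence features can be sketched in a few lines of Python:

```python
# A minimal sketch of an SVM ensemble for peptide-antibody binding
# prediction. The 3-mer featurization, synthetic peptides, and all
# parameters are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import CountVectorizer

rng = np.random.default_rng(0)
amino_acids = list("ACDEFGHIKLMNPQRSTVWY")
peptides = ["".join(rng.choice(amino_acids, size=15)) for _ in range(200)]
# Toy labels: pretend peptides containing aspartate (D) are the binders
labels = np.array([1 if "D" in p else 0 for p in peptides])

# Represent each peptide by its 3-mer composition
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False)
X = vectorizer.fit_transform(peptides)

# Bagged ensemble of RBF-kernel SVMs; members vote on the final call
model = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=10)
model.fit(X, labels)
print(model.score(X, labels))  # training accuracy on the toy data
```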

For the TF-DNA Motif Recognition Challenge, 14 teams submitted entries. First place was shared by Team csb_tut from Finland's Tampere University of Technology and Aalto University, which used a linear affinity model; and Team ACGT, from Tel Aviv University, which used a motif-finding algorithm called Amadeus for its predictions.
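
A linear affinity model of this general kind treats a probe's measured binding signal as an additive function of the subsequences it contains. The sketch below, with a planted motif and an assumed 8-mer featurization, illustrates the idea rather than csb_tut's actual method:

```python
# A rough sketch of a linear affinity model for protein-binding
# microarray probes; the 8-mer features, planted motif, and ridge
# penalty are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.feature_extraction.text import CountVectorizer

rng = np.random.default_rng(1)
motif = "ACGTACGT"
probes = []
for i in range(400):
    p = "".join(rng.choice(list("ACGT"), size=35))
    if i % 2 == 0:  # plant the motif in half the probes
        j = int(rng.integers(0, 35 - len(motif)))
        p = p[:j] + motif + p[j + len(motif):]
    probes.append(p)

# Synthetic binding signals: motif-bearing probes bind strongly
signals = np.array([2.0 if motif in p else 0.5 for p in probes])
signals += rng.normal(scale=0.1, size=len(probes))

# Treat the signal as an additive function of the 8-mers in each probe
vec = CountVectorizer(analyzer="char", ngram_range=(8, 8), lowercase=False)
X = vec.fit_transform(probes)
model = Ridge(alpha=1.0).fit(X, signals)

# The highest-weight 8-mers recover the planted binding motif
kmers = vec.get_feature_names_out()
print(kmers[np.argsort(model.coef_)[::-1][:5]])
```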

The Systems Genetics Challenge had two separate datasets: dataset A, based on in silico data and designed to infer causal network models among genes; and dataset B, based on experimental soybean data and designed to predict complex phenotypes from a combination of genetics and expression data.

A total of 16 teams submitted predictions for these two datasets. The winning entry for dataset A came from Team SaAB_meta from the French National Institute for Agricultural Research in Toulouse, which used a meta-analysis approach consisting of three tools to reconstruct the gene networks. The winning entry for dataset B came from Team Orangeballs from the Massachusetts Institute of Technology, which used maximum-correlation, minimum-redundancy regressors to predict soybean disease phenotypes.
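
The minimum-redundancy idea can be illustrated with a simple greedy selector that scores each candidate gene by its correlation with the phenotype minus its average correlation with the genes already chosen; the scoring rule and synthetic data below are assumptions, not the team's exact formulation:

```python
# A hedged sketch of greedy maximum-correlation, minimum-redundancy
# feature selection for phenotype prediction.
import numpy as np

def max_corr_min_red(X, y, n_select=5):
    """X: samples-by-genes expression matrix; y: phenotype vector."""
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = np.full(X.shape[1], -np.inf)
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Penalize correlation with the genes already selected
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                                  for k in selected])
            scores[j] = relevance[j] - redundancy
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 100))  # 60 lines, 100 genes (synthetic)
y = X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=60)  # toy phenotype
print(max_corr_min_red(X, y))  # genes 3 and 7 should rank highly
```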

Finally, 29 teams participated in the Network Inference Challenge, with the winning entry coming from Team ulg_blomod, from Belgium's University of Liege and Ghent University, which used an algorithm called Gene Network Inference with Ensemble of Trees, or GENIE3, to predict the networks. Team Amalia from the University of Munich, which took second place, used a two-way ANOVA-based strategy for its predictions.
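
GENIE3 casts network inference as a set of regression problems: each gene's expression is regressed on that of all other genes using an ensemble of trees, and the resulting feature importances serve as edge scores. A compact sketch of that scheme (not the authors' reference implementation) follows:

```python
# A compact reimplementation of the GENIE3 idea: regress each gene on
# all others with a random forest and use the feature importances as
# directed edge scores. Toy data and parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def genie3_scores(expr):
    """expr: samples-by-genes matrix; returns a genes-by-genes score matrix."""
    n_genes = expr.shape[1]
    scores = np.zeros((n_genes, n_genes))
    for target in range(n_genes):
        predictors = np.delete(np.arange(n_genes), target)
        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        rf.fit(expr[:, predictors], expr[:, target])
        scores[predictors, target] = rf.feature_importances_
    return scores  # scores[i, j] ranks the putative edge i -> j

rng = np.random.default_rng(3)
expr = rng.normal(size=(50, 5))
expr[:, 4] = 0.9 * expr[:, 0] + rng.normal(scale=0.3, size=50)  # gene 0 -> 4
print(genie3_scores(expr).round(2))
```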

A complete list of the best-performing entries is available here.

Future Plans

During a group discussion that followed the winning presentations, several participants called for more openness in the results, urging organizers to publish the names and results of all participants, including those that performed badly. Others, however, preferred to retain the current model, in which all teams remain anonymous except for the top-scoring entries.

In response, Stolovitzky maintained that “our motto is, 'Do no harm.' We want to help the community by generating a forum where everybody profits from the collaboration … If there is a hint that someone could suffer from it, then I would like to avoid that situation.”

He said that one compromise would be to make all the predictions available on “an anonymous basis” and “people who are okay with it [can release their names] and people who are apprehensive … [can] keep them anonymous.”

Moving forward, Stolovitzky said that the organizers are considering including a “grand challenge” that would be geared towards addressing “questions of biological significance.”

“The modality we have used [for DREAM is that] someone knows the answer, someone makes a prediction, and then we see whether the prediction is good compared to the known answer,” he said. “But it seems to us now that tapping the intellectual acumen of this community could produce more than just learning how good the algorithms are; we can start to produce new hypotheses that could be verified experimentally."

He continued, “If we can do that, then we are not just testing our algorithms but using our algorithms [to] create a hypothesis, test it, [and] learn from it … If we do it as a community, it's less likely that we will fail at that.”

Along these lines, he offered what he described as a few lessons learned from this year’s challenge.

“I think the biggest lesson learned is that if you aggregate the solution that the community gives … the result tends to be almost as good as the best prediction and often better than that,” he said. “If you aggregate many methods, that’s a winning strategy because in principle you will get the best of the best and the errors of the not-so-good algorithms for that task will be averaged out.”
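
One simple way to realize such an aggregation, assuming every team submits a confidence score for each candidate edge, is to average the teams' rank-transformed predictions, as sketched below:

```python
# A sketch of the community-aggregation strategy Stolovitzky describes,
# under the assumption that each team scores every candidate edge:
# rank-transform each submission and average the ranks, so that
# idiosyncratic errors from weaker methods wash out.
import numpy as np
from scipy.stats import rankdata

def aggregate_predictions(score_matrices):
    """Average the rank of every candidate edge across all teams."""
    ranks = [rankdata(-m.ravel()).reshape(m.shape) for m in score_matrices]
    return np.mean(ranks, axis=0)  # low mean rank = strong consensus

team_a = np.array([[0.0, 0.9], [0.1, 0.0]])  # team A's edge confidences
team_b = np.array([[0.0, 0.7], [0.6, 0.0]])  # team B's edge confidences
print(aggregate_predictions([team_a, team_b]))
```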

Andrea Califano, a professor of biomedical informatics at Columbia University and a DREAM organizer, echoed his sentiments, noting that DREAM is “emerging now potentially as an organism-size project.”

“Instead of thinking of science done by individuals, you can think of science done by a community … where the prediction of every group by [itself] may not be very specific but when you combine them together … that predictive capacity of the organism is much better than the predictive capacity of the individual cells, to use the metaphor,” he said.

Califano told BioInform that there has recently been a “shift” in the community’s focus and that, in a way, this year’s event is a “transition edition,” where “we are going to go from predicting networks to predicting behavior using networks.”

“One of the things that has emerged in the last few years is that being able to reconstruct the interactions in cells is not meaningful per se,” he said. “What is meaningful is whether you can use those interactions to then make predictions.”


Have topics you'd like to see covered in BioInform? Contact the editor at uthomas [at] genomeweb [.] com