If you ask five different bioinformaticists what the biggest challenges are in the field today, you’ll get five different answers — a fact borne out by a recent panel discussion hosted by the Swiss House for Advanced Research and Education (SHARE) at the Swiss Consulate in Cambridge, Mass.
The discussion, which took place in SHARE’s sleekly furnished storefront office on March 26, attracted a standing-room-only crowd of local students and biotech industry employees eager to pick the brains of five bioinformaticists from the private and public sectors. While questions ran the gamut — from “What is bioinformatics?” to “Is there any effort underway to develop more accurate force field models than CHARMM and AMBER?” — there was one topic that everyone was happy to weigh in on: “Which areas cause the most pain for bioinformatics today?”
Not surprisingly, data integration remains the bane of bioinformatics. Manuel Peitsch, head of informatics and knowledge management at Novartis, noted that even though data is now being integrated within single research fields, the challenge of integrating across vertical domains such as biology and chemistry remains unsolved. “The problem is not an IT problem,” Peitsch said. “It’s about integration at the level of the data.” The solution? Accurate and consistent data curation. “It’s underestimated; it’s not fancy; but it’s crucial,” he said.
For Charles DeLisi of the Biomolecular Systems Laboratory at Boston University, the biggest problem area right now is quality control in high-throughput experimental techniques like microarrays and yeast two-hybrid studies. These methods still create far too many false positives, he noted, “and we need to develop processing methods to improve our confidence so we don’t throw away 90 percent of our data.”
Bruno Sobral, director of the Virginia Bioinformatics Institute, agreed that data quality is important, but added that data management is also an unsolved problem. “We need open standards for transmitting biological data,” he said. While noting that several standards bodies and other organizations are currently working on a number of standards for bioinformatics data, Sobral warned that these approaches might hit a stumbling block in the much more complicated area of semantic integration.
Ioannis Xenarios, head of bioanalysis at Serono, added text-mining technology to the growing wish list. Even if data is effectively linked among databases, he pointed out, the interpretation and annotation that accompany most data are still in the form of free text, and therefore often inaccessible to automated analysis. Ontologies should help in this area, he said, but a great amount of work remains to be done.
For Ernest Feytmans, director of the Swiss Institute of Bioinformatics, it all comes down to old-fashioned number crunching. “There are still many problems that require tremendous computing power, NP-complete problems,” he said. Not only are “better heuristics” required to tackle these pesky puzzles, but improved raw computing capacity will be needed as well. “It’s not only a matter of organizing your data,” he noted.
Despite the challenges ahead, the panel participants were positive about the progress made in the field so far. Responding to a question about whether pharmaceutical companies have seen any value from bioinformatics, Novartis’ Peitsch offered an enthusiastic “Yes!” Said Peitsch, “We couldn’t do genome sequencing at all without bioinformatics.” DeLisi remarked that while industry has not yet seen measurable results from bioinformatics in the form of beefed-up drug pipelines, the academic community is already seeing cultural benefits from the increasingly interdisciplinary nature of the field. “The cultural revolution is laying seeds for the medical revolution, but it will still take 10-15 years before we see new drugs and targets from bioinformatics,” he said.