Science and Technology Trends Promise a Wild New Year for Bioinformatics


2001 was a tough year for most commercial bioinformatics efforts, but while Wall Street may have lost some of its enthusiasm for the sector, practitioners see plenty of opportunity for growth in 2002 as the field adapts to new developments in science and technology.

It appears that the hype over the Human Genome Project was partly to blame for the bioinformatics bubble that grew over the course of 1999 and 2000 and eventually burst last year. But while the capital markets may have reacted swiftly and harshly to the apparent slow rate of return on that investment, most industry insiders say that the best is yet to come for bioinformatics.

“There is a lot of work in bioinformatics that has gone into infrastructure of various sorts — data back ends, processing the human genome, finding all the genes,” said Steve Lincoln, vice president of product development at InforMax. Looking ahead to 2002, Lincoln said, “There’s going to be a lot more focus on tools for leveraging that infrastructure … to really positively impact the process of discovery research and development.”

Even as bioinformatics firms tighten their belts and modify their business models to keep pace in the changing economic environment, advances in functional genomics, proteomics, and predictive modeling promise to ensure increasing demand for better bioinformatics tools. Trends in computer science and IT will bring even more opportunities for those willing to apply new approaches to the computational challenges of biology — along with a steady stream of new players in the field.

 

THE GRASS IS GREENER ON BIOLOGY’S SIDE OF THE FENCE

 

Any doubts about the commercial promise of life sciences computing should be squelched by a quick rundown of IT companies that jumped into the sector in 2001. IBM, Compaq, and Sun continued to compete for a share of the growing high-performance computing market in the life sciences, and a new wave of IT vendors began to seriously court the life sciences over the course of the year as other vertical markets witnessed slowing growth.

Based largely on the predictions of market research firms such as Frost & Sullivan, which recently estimated that US bioinformatics companies would reach a market value of $6.9 billion by 2007, big IT players such as Microsoft, EMC, Hewlett-Packard, and Oracle cited the life sciences as a key future growth area during the course of the year. Look for more noise from these firms and others as competition heats up over 2002.

In addition, smaller software firms that had catered to the oil and gas, financial services, or telecommunications industries set their sights on bioinformatics as their best bet for survival in 2001. While the economic climate hasn't been kind to bioinformatics firms and continued consolidation is a given for existing players, that shouldn't stop new competition from jumping the fence into seemingly greener pastures as opportunities dry up even more quickly elsewhere.

One segment of the IT industry that has particularly high hopes for the life sciences is the still-nascent distributed computing field. Over the course of 2001, Entropia, Parabon, United Devices, Platform Computing, Avaki, and others pinned their hopes for grid technology on the computational demands of the biopharmaceutical sector. Only a few such customers have adopted the approach so far, so 2002 will be a make-or-break year for the technology and for these firms.

 

THE KILLER APP

 

On the software side, future market success will largely depend on the ability to deliver the next set of must-have bioinformatics tools. Now that the databases and sequence analysis tools that have traditionally supported the industry are essentially commodities, everyone is placing bets on what the next big thing will be.

Over the course of 2001, microarray analysis technology matured into a full-scale subdiscipline of bioinformatics. What will 2002 bring? Clearly, functional genomics and proteomics are next on the agenda, but the computational demands of these areas are not yet well-defined.

Russ Altman, director of Stanford University’s medical informatics laboratory, noted that the surge in microarray data has led directly to an increased interest in literature analysis, clustering algorithms, and probabilistic algorithms for assembling networks of gene interactions and pathways.

Richard Dweck, CEO of bioinformatics consulting firm 3rd Millennium, agreed that pathways would be big in 2002. “A year and a half ago you never even heard the word,” he said. “Now it’s a matter of getting all the data you need to put together reliable models of biological pathways.”

The push toward pathways and predictive modeling has led many in the industry to cite companies such as Physiome and Entelos as the ones to watch over the next year, but the lack of complete data in this area has led others to back proteomics as a more realistic area for short-term success. Not only is there immediate demand for the storage and analysis of mass spec and 2D gel data, but the tools created for microarray analysis can be applied to protein arrays just as easily as gene chips, giving developers working in this area a rare leg-up on that emerging technology.

“Proteomics is going to be huge,” said Steve Gardner, CTO of Viaken Systems. “We’re finally going to get to a point where we can bring together two of the most powerful technologies around for doing drug discovery — structure-based drug design and bioinformatics.”

But microarray analysis isn’t falling by the wayside yet. The great progress made in the past year in this area has only served to demonstrate how much room remains for improvement, particularly in statistical analysis. Competition in this area will continue to heat up over the next year as new firms pushing novel clustering algorithms continue to surface.

And the field may have overcome one important technological hurdle in 2001 that should ease further progress in 2002: Standardization efforts within bioinformatics are gaining momentum, and should enable greater interoperability as the amount and types of biological data explodes. From the formation of the industry-wide Interoperable Informatics Infrastructure Consortium to the MIAME standard for microarray experiments and the growing acceptance of the Gene Ontology Consortium, bioinformatics practitioners signaled a willingness to make sure they were speaking the same language in 2001.

“We’ve had a lot more progress in the development of standards in the last year than we probably have in all of the 5 or 10 years before that — in terms of standards that will stick and be usable,” said Gardner. 2002 will prove whether these efforts will indeed stick, but most observers are optimistic.

 

THE NEXT GENERATION

 

One area of bioinformatics that did witness a boom in 2001 was education, as both new university degree programs and less formal programs such as the S-Star project sprang up to meet the demand for trained bioinformaticists in industry. This trend will certainly continue into the coming year as a growing number of institutions plan to add bioinformatics degree programs.

These new programs will have to face some questions about the core curriculum for bioinformatics, according to Altman. “There’s emerged the idea that there’s two types of people we need to train — tool builders and tool users,” he said. “Now groups are starting to define which of those groups they’re interested in training because the training is very different.” University programs should begin refining their curricula to meet this distinction over the next year.

But while demand for good bioinformaticists is still high, the scarcity witnessed in the past may be abating, largely due to the slowing job market for computer scientists.

“The best people are available again,” said Dweck. “We’re getting some good candidates in the door. People coming in are really hungry to get back into doing something exciting to them and an area that has a future.”

However, Altman noted, while demand may have slowed a bit, this shouldn’t dissuade students from pursuing formal training. “If you love this field and you go in for the training, you’re not going to have a problem getting a job,” he said.

— BT
