Genedata to Tackle HCS with Evotec; Claims it Can Move Imaging Screens Into HTS Territory


GENEVA — Perhaps a name change is in order for Genedata.

The Swiss informatics firm, whose name reflects its roots in designing software for microarray data analysis, continues to expand its products to address all aspects of the drug-discovery process, including high-throughput biochemical screens, mass spectrometry, and now, high-content screening.

Genedata announced last week a strategic partnership with German HCS and HTS vendor Evotec Technologies, in which Evotec's EVOScreen and plate::explorer readers and Acapella image-analysis software will be co-marketed with Genedata's Screener data analysis software.

In addition, Stephan Heyse, project leader for the Genedata Screener software, delivered a presentation at the Society for Biomolecular Screening conference held here last week detailing how Screener could also be used in combination with Evotec's Opera high-content screening instrument — which can be integrated into the EVOScreen platform — to cut the time it takes to perform large-scale image-based screens to hours, from days or weeks.

"It's not how fast the computer is running, it's how fast your head is running. If you know what you're looking for, then I believe the computer can give you answers quickly. But these guys, who are biologists, and not information processors, said 'a couple of days,' and the information processing professional [Heyse] said 'a couple of hours,' so I believe them both."

Genedata's approach to increasing the speed of high-content image-based screens is to "systematically analyze as much information as possible, because these are very information-rich screens," Heyse told CBA News in an interview following his presentation.

Put simply, the approach would use algorithms that help cancel some of the redundancies that result when data is extracted from thousands upon thousands of acquired images, thereby leaving only the data that is relevant to the particular question being asked by a user.

"Ideally you would do this on a complete image level, but that is simply not feasible," Heyse said. "So there is first a data-reduction step. If you are doing image analysis, then you potentially have a lot of parameters. Ideally you encapsulate all this information for making final decisions, for example, to classify good hits versus bad hits.

"But if you need to reduce the data further for easy understanding and easy decision making, then you can sort of score your parameters that you extract from the images by choosing the optimal ones for your question," Heyse added.

In his talk, Heyse also referenced a recent paper published in Science by Zachary Perlman and colleagues at the Harvard Medical School's Institute of Chemistry and Cell Biology and Department of Systems Biology, which suggested that large sets of unbiased measurements might serve as high-dimensional cytological profiles analogous to transcriptional profiles in drug screening applications (see CBA News, 11/23/2004). Although Genedata's analysis method differs from this approach, it represents a different means to the same end: moving high-content analysis to HTS speeds.

Heyse's talk generated a lot of interest from the crowd, and understandably so: scientists have been looking for ways to increase the speed of high-content screening for as long as the technique has existed, in order to move it from the realm of secondary screening, target validation, and toxicology studies into primary screening.

Several company representatives approached Heyse following his presentation to express their interest in Genedata's software; however, the talk also generated significant skepticism among Heyse's peers.

John Dunne, associate scientific director for BD Biosciences, questioned Heyse about the practicality of the approach. While he was impressed with the amount of data that could be condensed, he asked Heyse how long it would take a screening scientist to use the software to conduct such a thorough analysis. When Heyse replied several hours, Dunne and a few others in the room were skeptical.

"Part of the question is 'How long is it going to take?'" Dunne told CBA News in an interview following the presentation. "And that's moderated in part by how expert the user is. He could probably do it much more quickly than I could, but the question is 'How quickly can I do it?'"

Dunne conceded that immediately following Heyse's talk, he queried some of his colleagues from HTS labs at pharmaceutical companies who confirmed that the approach was indeed fast — but not quite as fast as Heyse made it out to be.

"They told me that it would take a couple of days to generate all the information that he showed," Dunne said. "That was still surprising — I was thinking more like many months.

"There's a lot associated with thinking," Dunne added. "It's not how fast the computer is running, it's how fast your head is running. If you know what you're looking for, then I believe the computer can give you answers quickly. But these guys, who are biologists, and not information processors, said 'a couple of days,' and the information processing professional [Heyse] said 'a couple of hours,' so I believe them both."

Speaking after his presentation, Heyse defended his claims further, and seemed to understand the source of Dunne's and others' skepticism.

"You can encapsulate a lot of the complexity under the hood, basically, and that's what we're trying to do with this approach," he told CBA News. "Of course, you have two different interests. There are the end users, who want a simple revelation — not throwing away the information, but simply a relation of all the data in a sensible way.

"And then you have the experts in companies who want to dive into data, look for new options, et cetera," Heyse added. "So somehow we have to satisfy both, so there are some workflows that are simple enough for end users, but there are some options for the experts where they can fine-tune the algorithms."

Genedata is currently attempting to address this situation, and the Screener software, as it applies to high-content screening, will likely further evolve, Heyse said.

If Genedata does succeed in designing a data analysis method that could condense high-content screens into a day's work, the result could be a boon for the informatics firm, as well as for its partners.

As it turns out, there may be other partners in this area down the road for Genedata. Heyse said that the firm will focus on its co-marketing agreement with Evotec in order to provide customers with complete HCS and HTS packages, from image acquisition to analysis to data management. "But of course, if they have other [HCS] instruments, it will be feasible too," Heyse said.

— Ben Butkus ([email protected])
